diff --git a/articles/ambassador_pattern_linking.md~ b/articles/ambassador_pattern_linking.md~ deleted file mode 100644 index 9b1b0329a6..0000000000 --- a/articles/ambassador_pattern_linking.md~ +++ /dev/null @@ -1,150 +0,0 @@ -page_title: Link via an Ambassador Container -page_description: Using the Ambassador pattern to abstract (network) services -page_keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming - -# Link via an Ambassador Container - -## Introduction - -Rather than hardcoding network links between a service consumer and -provider, Docker encourages service portability, for example instead of: - - (consumer) --> (redis) - -Requiring you to restart the `consumer` to attach it to a different -`redis` service, you can add ambassadors: - - (consumer) --> (redis-ambassador) --> (redis) - -Or - - (consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis) - -When you need to rewire your consumer to talk to a different Redis -server, you can just restart the `redis-ambassador` container that the -consumer is connected to. - -This pattern also allows you to transparently move the Redis server to a -different docker host from the consumer. - -Using the `svendowideit/ambassador` container, the link wiring is -controlled entirely from the `docker run` parameters. 
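Under the hood, that wiring is just `socat` forwarding driven by Docker's link environment variables; the `sed` one-liner from the image's Dockerfile (shown in full later in this article) does the translation. Here it is run against a sample variable rather than a live container, printing the forwarding command instead of executing it:

```shell
# Rewrite a Docker link variable of the form *_PORT_<n>_TCP=tcp://<ip>:<port>
# into the socat command an ambassador would run (printed, not executed)
echo "REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379" |
    sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
# → socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 &
```

The consumer only ever sees the listening side; retargeting means restarting the ambassador with a different variable.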
- -## Two host Example - -Start the actual Redis server on one Docker host: - - big-server $ sudo docker run -d --name redis crosbymichael/redis - -Then add an ambassador linked to the Redis server, mapping a port to the -outside world: - - big-server $ sudo docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador - -On the other host, you can set up another ambassador, setting environment -variables for each remote port you want to proxy to the `big-server`: - - client-server $ sudo docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador - -Then on the `client-server` host, you can use a Redis client container -to talk to the remote Redis server, just by linking to the local Redis -ambassador. - - client-server $ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli - redis 172.17.0.160:6379> ping - PONG - -## How it works - -The following example shows what the `svendowideit/ambassador` container -does automatically (with a tiny amount of `sed`). - -On the Docker host (192.168.1.52) that Redis will run on: - - # start actual redis server - $ sudo docker run -d --name redis crosbymichael/redis - - # get a redis-cli container for connection testing - $ sudo docker pull relateiq/redis-cli - - # test the redis server by talking to it directly - $ sudo docker run -t -i --rm --link redis:redis relateiq/redis-cli - redis 172.17.0.136:6379> ping - PONG - ^D - - # add redis ambassador - $ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh - -In the `redis_ambassador` container, you can see the linked Redis -container's environment with `env`: - - $ env - REDIS_PORT=tcp://172.17.0.136:6379 - REDIS_PORT_6379_TCP_ADDR=172.17.0.136 - REDIS_NAME=/redis_ambassador/redis - HOSTNAME=19d7adf4705e - REDIS_PORT_6379_TCP_PORT=6379 - HOME=/ - REDIS_PORT_6379_TCP_PROTO=tcp - container=lxc - REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379 - TERM=xterm - 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - PWD=/ - -This environment is used by the ambassador `socat` script to expose Redis -to the world (via the `-p 6379:6379` port mapping): - - $ sudo docker rm redis_ambassador - $ sudo ./contrib/mkimage-unittest.sh - $ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh - - $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 - -With the `socat` forwarder running, Redis is now reachable through the -ambassador. Next, go to a different server: - - $ sudo ./contrib/mkimage-unittest.sh - $ sudo docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh - - $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 - -And get the `redis-cli` image so we can talk over the ambassador bridge. - - $ sudo docker pull relateiq/redis-cli - $ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli - redis 172.17.0.160:6379> ping - PONG - -## The svendowideit/ambassador Dockerfile - -The `svendowideit/ambassador` image is a small `busybox` image with -`socat` built in. When you start the container, it uses a small `sed` -script to parse out the (possibly multiple) link environment variables -to set up the port forwarding. On the remote host, you need to set the -variable using the `-e` command line option. - - --expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 - -This will forward the local port `1234` to the remote IP and port, in this -case `192.168.1.52:6379`. - - # - # - # first you need to build the docker-ut image - # using ./contrib/mkimage-unittest.sh - # then - # docker build -t SvenDowideit/ambassador . 
- # docker tag SvenDowideit/ambassador ambassador - # then to run it (on the host that has the real backend on it) - # docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador - # on the remote host, you can set up another ambassador - # docker run -t -i --name redis_ambassador --expose 6379 sh - - FROM docker-ut - MAINTAINER SvenDowideit@home.org.au - - - CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' | sh && top diff --git a/articles/b2d_volume_resize.md~ b/articles/b2d_volume_resize.md~ deleted file mode 100644 index 1b39b49eda..0000000000 --- a/articles/b2d_volume_resize.md~ +++ /dev/null @@ -1,101 +0,0 @@ -page_title: Resizing a Boot2Docker Volume -page_description: Resizing a Boot2Docker Volume in VirtualBox with GParted -page_keywords: boot2docker, volume, virtualbox - -# Getting “no space left on device” errors with Boot2Docker? - -If you're using Boot2Docker with a large number of images, or the images you're -working with are very large, your pulls might start failing with "no space left -on device" errors when the Boot2Docker volume fills up. The solution is to -increase the volume size by first cloning it, then resizing it using a disk -partitioning tool. - -We recommend [GParted](http://gparted.sourceforge.net/download.php/index.php). -The tool comes as a bootable ISO, is a free download, and works well with -VirtualBox. - -## 1. Stop Boot2Docker - -Issue the command to stop the Boot2Docker VM on the command line: - - $ boot2docker stop - -## 2. Clone the VMDK image to a VDI image - -Boot2Docker ships with a VMDK image, which can’t be resized by VirtualBox’s -native tools. We will instead create a VDI volume and clone the VMDK volume to -it. - -Using the command line VirtualBox tools, clone the VMDK image to a VDI image: - - $ vboxmanage clonehd /full/path/to/boot2docker-hd.vmdk /full/path/to/.vdi --format VDI --variant Standard - -## 3. 
Resize the VDI volume - -Choose a size that will be appropriate for your needs. If you’re spinning up a -lot of containers, or your containers are particularly large, larger will be -better: - - $ vboxmanage modifyhd /full/path/to/.vdi --resize - -## 4. Download a disk partitioning tool ISO - -To resize the volume, we'll use [GParted](http://gparted.sourceforge.net/download.php/). -Once you've downloaded the tool, add the ISO to the Boot2Docker VM IDE bus. -You might need to create the bus before you can add the ISO. - -> **Note:** -> It's important that you choose a partitioning tool that is available as an ISO so -> that the Boot2Docker VM can be booted with it. - - - - - - - - -


- -## 5. Add the new VDI image - -In the settings for the Boot2Docker image in VirtualBox, remove the VMDK image -from the SATA controller and add the VDI image. - - - -## 6. Verify the boot order - -In the **System** settings for the Boot2Docker VM, make sure that **CD/DVD** is -at the top of the **Boot Order** list. - - - -## 7. Boot to the disk partitioning ISO - -Manually start the Boot2Docker VM in VirtualBox, and the disk partitioning ISO -should start up. Using GParted, choose the **GParted Live (default settings)** -option. Choose the default keyboard, language, and XWindows settings, and the -GParted tool will start up and display the VDI volume you created. Right click -on the VDI and choose **Resize/Move**. - - - -Drag the slider representing the volume to the maximum available size, click -**Resize/Move**, and then **Apply**. - - - -Quit GParted and shut down the VM. Remove the GParted ISO from the IDE controller -for the Boot2Docker VM in VirtualBox. - -## 8. Start the Boot2Docker VM - -Fire up the Boot2Docker VM manually in VirtualBox. The VM should log in -automatically, but if it doesn't, the credentials are `docker/tcuser`. Using -the `df -h` command, verify that your changes took effect. - - - -You’re done! - diff --git a/articles/baseimages.md~ b/articles/baseimages.md~ deleted file mode 100644 index 5a5addd1aa..0000000000 --- a/articles/baseimages.md~ +++ /dev/null @@ -1,68 +0,0 @@ -page_title: Create a Base Image -page_description: How to create base images -page_keywords: Examples, Usage, base image, docker, documentation, examples - -# Create a Base Image - -So you want to create your own [*Base Image*]( -/terms/image/#base-image)? Great! - -The specific process will depend heavily on the Linux distribution you -want to package. We have some examples below, and you are encouraged to -submit pull requests to contribute new ones. 
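Whatever the distribution, the underlying mechanic is the same: a directory tree is tarred up and streamed into `docker import`. A tiny throwaway illustration of just that mechanic (not a usable base image):

```shell
# Any directory tree can become an image: tar it and feed it to
# `docker import`. Here we build a trivial throwaway tree; a real
# base image would come from a tool like debootstrap.
mkdir -p myimage/etc
echo "hello from a minimal base image" > myimage/etc/motd
tar -C myimage -cf myimage.tar .

# On a Docker host, the tarball then becomes an image:
#   docker import - myimage:test < myimage.tar
tar -tf myimage.tar | grep etc/motd
```

The distribution-specific examples below differ only in how the directory tree is produced.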
- -## Create a full image using tar - -In general, you'll want to start with a working machine that is running -the distribution you'd like to package as a base image, though that is -not required for some tools like Debian's -[Debootstrap](https://wiki.debian.org/Debootstrap), which you can also -use to build Ubuntu images. - -Creating an Ubuntu base image can be as simple as this: - - $ sudo debootstrap raring raring > /dev/null - $ sudo tar -C raring -c . | sudo docker import - raring - a29c15f1bf7a - $ sudo docker run raring cat /etc/lsb-release - DISTRIB_ID=Ubuntu - DISTRIB_RELEASE=13.04 - DISTRIB_CODENAME=raring - DISTRIB_DESCRIPTION="Ubuntu 13.04" - -There are more example scripts for creating base images in the Docker -GitHub Repo: - - - [BusyBox](https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh) - - CentOS / Scientific Linux CERN (SLC) [on Debian/Ubuntu]( - https://github.com/docker/docker/blob/master/contrib/mkimage-rinse.sh) or - [on CentOS/RHEL/SLC/etc.]( - https://github.com/docker/docker/blob/master/contrib/mkimage-yum.sh) - - [Debian / Ubuntu]( - https://github.com/docker/docker/blob/master/contrib/mkimage-debootstrap.sh) - -## Creating a simple base image using `scratch` - -There is a special repository in the Docker registry called `scratch`, which -was created using an empty tar file: - - $ tar cv --files-from /dev/null | docker import - scratch - -which you can `docker pull`. You can then build your new minimal containers -`FROM` that image: - - FROM scratch - COPY true-asm /true - CMD ["/true"] - -The `Dockerfile` above is from an extremely minimal image - [tianon/true]( -https://github.com/tianon/dockerfiles/tree/master/true). - -## More resources - -There are many more resources available to help you write your `Dockerfile`. - -* There's a [complete guide to all the instructions](/reference/builder/) available for use in a `Dockerfile` in the reference section. 
-* To help you write a clear, readable, maintainable `Dockerfile`, we've also -written a [`Dockerfile` Best Practices guide](/articles/dockerfile_best-practices). -* If you're working on an Official Repo, be sure to check out the [Official Repo Guidelines](/docker-hub/official_repos/). diff --git a/articles/basics.md~ b/articles/basics.md~ deleted file mode 100644 index 4cdcab4aa4..0000000000 --- a/articles/basics.md~ +++ /dev/null @@ -1,179 +0,0 @@ -page_title: First steps with Docker -page_description: Common usage and commands -page_keywords: Examples, Usage, basic commands, docker, documentation, examples - -# First steps with Docker - -## Check your Docker install - -This guide assumes you have a working installation of Docker. To check -your Docker install, run the following command: - - # Check that you have a working install - $ sudo docker info - -If you get `docker: command not found` or something like -`/var/lib/docker/repositories: permission denied` you may have an -incomplete Docker installation or insufficient privileges to access -Docker on your machine. - -Please refer to [*Installation*](/installation) -for installation instructions. - -## Download a pre-built image - - # Download an ubuntu image - $ sudo docker pull ubuntu - -This will find the `ubuntu` image by name on -[*Docker Hub*](/userguide/dockerrepos/#searching-for-images) -and download it from [Docker Hub](https://hub.docker.com) to a local -image cache. - -> **Note**: -> When the image has successfully downloaded, you will see a 12 character -> hash `539c0211cd76: Download complete` which is the -> short form of the image ID. 
These short image IDs are the first 12 -> characters of the full image ID - which can be found using -> `docker inspect` or `docker images --no-trunc=true` - -{{ include "no-remote-sudo.md" }} - -## Running an interactive shell - - # Run an interactive shell in the ubuntu image, - # allocate a tty, attach stdin and stdout - # To detach the tty without exiting the shell, - # use the escape sequence Ctrl-p + Ctrl-q - # note: This will continue to exist in a stopped state once exited (see "docker ps -a") - $ sudo docker run -i -t ubuntu /bin/bash - -## Bind Docker to another host/port or a Unix socket - -> **Warning**: -> Changing the default `docker` daemon binding to a -> TCP port or Unix *docker* user group will increase your security risks -> by allowing non-root users to gain *root* access on the host. Make sure -> you control access to `docker`. If you are binding -> to a TCP port, anyone with access to that port has full Docker access; -> so it is not advisable on an open network. - -With `-H` it is possible to make the Docker daemon to listen on a -specific IP and port. By default, it will listen on -`unix:///var/run/docker.sock` to allow only local connections by the -*root* user. You *could* set it to `0.0.0.0:2375` or a specific host IP -to give access to everybody, but that is **not recommended** because -then it is trivial for someone to gain root access to the host where the -daemon is running. - -Similarly, the Docker client can use `-H` to connect to a custom port. - -`-H` accepts host and port assignment in the following format: - - tcp://[host][:port]` or `unix://path - -For example: - -- `tcp://host:2375` -> TCP connection on - host:2375 -- `unix://path/to/socket` -> Unix socket located - at `path/to/socket` - -`-H`, when empty, will default to the same value as -when no `-H` was passed in. 
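As an aside, the client-side `-H` can also be set once via the `DOCKER_HOST` environment variable instead of being repeated on every command:

```shell
# Point the Docker client at a TCP-bound daemon once, instead of
# passing -H to every invocation
export DOCKER_HOST=tcp://127.0.0.1:2375

# subsequent plain client commands now talk to that daemon, e.g.:
#   docker pull ubuntu
```

The same security caveat applies: anything that can reach that TCP port has full control of the daemon.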
- -`-H` also accepts short form for TCP bindings: - - host[:port]` or `:port - -Run Docker in daemon mode: - - $ sudo /docker -H 0.0.0.0:5555 -d & - -Download an `ubuntu` image: - - $ sudo docker -H :5555 pull ubuntu - -You can use multiple `-H`, for example, if you want to listen on both -TCP and a Unix socket - - # Run docker in daemon mode - $ sudo /docker -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -d & - # Download an ubuntu image, use default Unix socket - $ sudo docker pull ubuntu - # OR use the TCP port - $ sudo docker -H tcp://127.0.0.1:2375 pull ubuntu - -## Starting a long-running worker process - - # Start a very useful long-running process - $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done") - - # Collect the output of the job so far - $ sudo docker logs $JOB - - # Kill the job - $ sudo docker kill $JOB - -## Listing containers - - $ sudo docker ps # Lists only running containers - $ sudo docker ps -a # Lists all containers - -## Controlling containers - - # Start a new container - $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done") - - # Stop the container - $ sudo docker stop $JOB - - # Start the container - $ sudo docker start $JOB - - # Restart the container - $ sudo docker restart $JOB - - # SIGKILL a container - $ sudo docker kill $JOB - - # Remove a container - $ sudo docker stop $JOB # Container must be stopped to remove it - $ sudo docker rm $JOB - -## Bind a service on a TCP port - - # Bind port 4444 of this container, and tell netcat to listen on it - $ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444) - - # Which public port is NATed to my container? 
- $ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }') - - # Connect to the public port - $ echo hello world | nc 127.0.0.1 $PORT - - # Verify that the network connection worked - $ echo "Daemon received: $(sudo docker logs $JOB)" - -## Committing (saving) a container state - -Save your containers state to an image, so the state can be -re-used. - -When you commit your container only the differences between the image -the container was created from and the current state of the container -will be stored (as a diff). See which images you already have using the -`docker images` command. - - # Commit your container to a new named image - $ sudo docker commit - - # List your containers - $ sudo docker images - -You now have an image state from which you can create new instances. - -Read more about [*Share Images via -Repositories*](/userguide/dockerrepos) or -continue to the complete [*Command -Line*](/reference/commandline/cli) diff --git a/articles/basics.md~~ b/articles/basics.md~~ deleted file mode 100644 index 4cdcab4aa4..0000000000 --- a/articles/basics.md~~ +++ /dev/null @@ -1,179 +0,0 @@ -page_title: First steps with Docker -page_description: Common usage and commands -page_keywords: Examples, Usage, basic commands, docker, documentation, examples - -# First steps with Docker - -## Check your Docker install - -This guide assumes you have a working installation of Docker. To check -your Docker install, run the following command: - - # Check that you have a working install - $ sudo docker info - -If you get `docker: command not found` or something like -`/var/lib/docker/repositories: permission denied` you may have an -incomplete Docker installation or insufficient privileges to access -Docker on your machine. - -Please refer to [*Installation*](/installation) -for installation instructions. 
- -## Download a pre-built image - - # Download an ubuntu image - $ sudo docker pull ubuntu - -This will find the `ubuntu` image by name on -[*Docker Hub*](/userguide/dockerrepos/#searching-for-images) -and download it from [Docker Hub](https://hub.docker.com) to a local -image cache. - -> **Note**: -> When the image has successfully downloaded, you will see a 12 character -> hash `539c0211cd76: Download complete` which is the -> short form of the image ID. These short image IDs are the first 12 -> characters of the full image ID - which can be found using -> `docker inspect` or `docker images --no-trunc=true` - -{{ include "no-remote-sudo.md" }} - -## Running an interactive shell - - # Run an interactive shell in the ubuntu image, - # allocate a tty, attach stdin and stdout - # To detach the tty without exiting the shell, - # use the escape sequence Ctrl-p + Ctrl-q - # note: This will continue to exist in a stopped state once exited (see "docker ps -a") - $ sudo docker run -i -t ubuntu /bin/bash - -## Bind Docker to another host/port or a Unix socket - -> **Warning**: -> Changing the default `docker` daemon binding to a -> TCP port or Unix *docker* user group will increase your security risks -> by allowing non-root users to gain *root* access on the host. Make sure -> you control access to `docker`. If you are binding -> to a TCP port, anyone with access to that port has full Docker access; -> so it is not advisable on an open network. - -With `-H` it is possible to make the Docker daemon to listen on a -specific IP and port. By default, it will listen on -`unix:///var/run/docker.sock` to allow only local connections by the -*root* user. You *could* set it to `0.0.0.0:2375` or a specific host IP -to give access to everybody, but that is **not recommended** because -then it is trivial for someone to gain root access to the host where the -daemon is running. - -Similarly, the Docker client can use `-H` to connect to a custom port. 
- -`-H` accepts host and port assignment in the following format: - - tcp://[host][:port]` or `unix://path - -For example: - -- `tcp://host:2375` -> TCP connection on - host:2375 -- `unix://path/to/socket` -> Unix socket located - at `path/to/socket` - -`-H`, when empty, will default to the same value as -when no `-H` was passed in. - -`-H` also accepts short form for TCP bindings: - - host[:port]` or `:port - -Run Docker in daemon mode: - - $ sudo /docker -H 0.0.0.0:5555 -d & - -Download an `ubuntu` image: - - $ sudo docker -H :5555 pull ubuntu - -You can use multiple `-H`, for example, if you want to listen on both -TCP and a Unix socket - - # Run docker in daemon mode - $ sudo /docker -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -d & - # Download an ubuntu image, use default Unix socket - $ sudo docker pull ubuntu - # OR use the TCP port - $ sudo docker -H tcp://127.0.0.1:2375 pull ubuntu - -## Starting a long-running worker process - - # Start a very useful long-running process - $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done") - - # Collect the output of the job so far - $ sudo docker logs $JOB - - # Kill the job - $ sudo docker kill $JOB - -## Listing containers - - $ sudo docker ps # Lists only running containers - $ sudo docker ps -a # Lists all containers - -## Controlling containers - - # Start a new container - $ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done") - - # Stop the container - $ sudo docker stop $JOB - - # Start the container - $ sudo docker start $JOB - - # Restart the container - $ sudo docker restart $JOB - - # SIGKILL a container - $ sudo docker kill $JOB - - # Remove a container - $ sudo docker stop $JOB # Container must be stopped to remove it - $ sudo docker rm $JOB - -## Bind a service on a TCP port - - # Bind port 4444 of this container, and tell netcat to listen on it - $ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444) - 
- # Which public port is NATed to my container? - $ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }') - - # Connect to the public port - $ echo hello world | nc 127.0.0.1 $PORT - - # Verify that the network connection worked - $ echo "Daemon received: $(sudo docker logs $JOB)" - -## Committing (saving) a container state - -Save your containers state to an image, so the state can be -re-used. - -When you commit your container only the differences between the image -the container was created from and the current state of the container -will be stored (as a diff). See which images you already have using the -`docker images` command. - - # Commit your container to a new named image - $ sudo docker commit - - # List your containers - $ sudo docker images - -You now have an image state from which you can create new instances. - -Read more about [*Share Images via -Repositories*](/userguide/dockerrepos) or -continue to the complete [*Command -Line*](/reference/commandline/cli) diff --git a/articles/certificates.md~ b/articles/certificates.md~ deleted file mode 100644 index ebd606f385..0000000000 --- a/articles/certificates.md~ +++ /dev/null @@ -1,114 +0,0 @@ -page_title: Using certificates for repository client verification -page_description: How to set up and use certificates with a registry to verify access -page_keywords: Usage, registry, repository, client, root, certificate, docker, apache, ssl, tls, documentation, examples, articles, tutorials - -# Using certificates for repository client verification - -In [Running Docker with HTTPS](/articles/https), you learned that, by default, -Docker runs via a non-networked Unix socket and TLS must be enabled in order -to have the Docker client and the daemon communicate securely over HTTPS. 
- -Now, you will see how to allow the Docker registry (i.e., *a server*) to -verify that the Docker daemon (i.e., *a client*) has the right to access the -images being hosted with *certificate-based client-server authentication*. - -We will show you how to install a Certificate Authority (CA) root certificate -for the registry and how to set the client TLS certificate for verification. - -## Understanding the configuration - -A custom certificate is configured by creating a directory under -`/etc/docker/certs.d` using the same name as the registry's hostname (e.g., -`localhost`). All `*.crt` files are added to this directory as CA roots. - -> **Note:** -> In the absence of any root certificate authorities, Docker -> will use the system default (i.e., host's root CA set). - -The presence of one or more `.key/cert` pairs indicates to Docker -that there are custom certificates required for access to the desired -repository. - -> **Note:** -> If there are multiple certificates, each will be tried in alphabetical -> order. If there is an authentication error (e.g., 403, 404, 5xx, etc.), Docker -> will continue to try with the next certificate. - -Our example is set up like this: - - /etc/docker/certs.d/ <-- Certificate directory - └── localhost <-- Hostname - ├── client.cert <-- Client certificate - ├── client.key <-- Client key - └── localhost.crt <-- Registry certificate - -## Creating the client certificates - -You will use OpenSSL's `genrsa` and `req` commands to first generate an RSA -key and then use the key to create the certificate. - - $ openssl genrsa -out client.key 1024 - $ openssl req -new -x509 -text -key client.key -out client.cert - -> **Warning:** -> Using TLS and managing a CA is an advanced topic. -> You should be familiar with OpenSSL, x509, and TLS before -> attempting to use them in production. - -> **Warning:** -> These TLS commands will only generate a working set of certificates on Linux. 
-> The version of OpenSSL in Mac OS X is incompatible with the type of -> certificate Docker requires. - -## Testing the verification setup - -You can test this setup by using Apache to host a Docker registry. -For this purpose, you can copy a registry tree (containing images) inside -the Apache root. - -> **Note:** -> You can find such an example [here]( -> http://people.gnome.org/~alexl/v1.tar.gz) - which contains the busybox image. - -Once you set up the registry, you can use the following Apache configuration -to implement certificate-based protection. - - # This must be in the root context, otherwise it causes a re-negotiation - # which is not supported by the TLS implementation in go - SSLVerifyClient optional_no_ca - - - Action cert-protected /cgi-bin/cert.cgi - SetHandler cert-protected - - Header set x-docker-registry-version "0.6.2" - SetEnvIf Host (.*) custom_host=$1 - Header set X-Docker-Endpoints "%{custom_host}e" - - -Save the above content as `/etc/httpd/conf.d/registry.conf`, and -continue with creating a `cert.cgi` file under `/var/www/cgi-bin/`. - - #!/bin/bash - if [ "$HTTPS" != "on" ]; then - echo "Status: 403 Not using SSL" - echo "x-docker-registry-version: 0.6.2" - echo - exit 0 - fi - if [ "$SSL_CLIENT_VERIFY" == "NONE" ]; then - echo "Status: 403 Client certificate invalid" - echo "x-docker-registry-version: 0.6.2" - echo - exit 0 - fi - echo "Content-length: $(stat --printf='%s' $PATH_TRANSLATED)" - echo "x-docker-registry-version: 0.6.2" - echo "X-Docker-Endpoints: $SERVER_NAME" - echo "X-Docker-Size: 0" - echo - - cat $PATH_TRANSLATED - -This CGI script will ensure that all requests to `/v1` *without* a valid -certificate will be returned with a `403` (i.e., HTTP forbidden) error. 
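For scripted setups, the interactive `openssl req` step shown earlier can be made non-interactive by supplying the subject on the command line; the `-subj` value here is an arbitrary placeholder, not a required name:

```shell
# Same key/certificate generation as above, but non-interactive;
# the subject string is an arbitrary example
openssl genrsa -out client.key 1024
openssl req -new -x509 -text -key client.key -out client.cert \
    -subj "/C=US/O=Example Org/CN=docker-client"
```

The resulting `client.key`/`client.cert` pair goes into the `/etc/docker/certs.d/<hostname>/` directory described above.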
diff --git a/articles/cfengine_process_management.md~ b/articles/cfengine_process_management.md~ deleted file mode 100644 index e32b266397..0000000000 --- a/articles/cfengine_process_management.md~ +++ /dev/null @@ -1,143 +0,0 @@ -page_title: Process Management with CFEngine -page_description: Managing containerized processes with CFEngine -page_keywords: cfengine, process, management, usage, docker, documentation - -# Process Management with CFEngine - -Create Docker containers with managed processes. - -Docker monitors one process in each running container and the container -lives or dies with that process. By introducing CFEngine inside Docker -containers, we can alleviate a few of the issues that may arise: - - - It is possible to easily start multiple processes within a - container, all of which will be managed automatically, with the - normal `docker run` command. - - If a managed process dies or crashes, CFEngine will start it again - within 1 minute. - - The container itself will live as long as the CFEngine scheduling - daemon (cf-execd) lives. With CFEngine, we are able to decouple the - life of the container from the uptime of the service it provides. - -## How it works - -CFEngine, together with the cfe-docker integration policies, are -installed as part of the Dockerfile. This builds CFEngine into our -Docker image. - -The Dockerfile's `ENTRYPOINT` takes an arbitrary -amount of commands (with any desired arguments) as parameters. When we -run the Docker container these parameters get written to CFEngine -policies and CFEngine takes over to ensure that the desired processes -are running in the container. - -CFEngine scans the process table for the `basename` of the commands given -to the `ENTRYPOINT` and runs the command to start the process if the `basename` -is not found. For example, if we start the container with -`docker run "/path/to/my/application parameters"`, CFEngine will look for a -process named `application` and run the command. 
If an entry for `application` -is not found in the process table at any point in time, CFEngine will execute -`/path/to/my/application parameters` to start the application once again. The -check on the process table happens every minute. - -Note that it is therefore important that the command to start your -application leaves a process with the basename of the command. This can -be made more flexible by making some minor adjustments to the CFEngine -policies, if desired. - -## Usage - -This example assumes you have Docker installed and working. We will -install and manage `apache2` and `sshd` -in a single container. - -There are three steps: - -1. Install CFEngine into the container. -2. Copy the CFEngine Docker process management policy into the - containerized CFEngine installation. -3. Start your application processes as part of the `docker run` command. - -### Building the image - -The first two steps can be done as part of a Dockerfile, as follows. - - FROM ubuntu - MAINTAINER Eystein Måløy Stenberg - - RUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates - - # install latest CFEngine - RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add - - RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list - RUN apt-get update && apt-get install -y cfengine-community - - # install cfe-docker process management policy - RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/ - RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/ - RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/ - RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip - - # apache2 and openssh are just for testing purposes, install your own apps here - RUN apt-get update && apt-get install -y openssh-server apache2 - RUN mkdir -p /var/run/sshd - RUN echo "root:password" | chpasswd # need a password for ssh - - ENTRYPOINT 
["/var/cfengine/bin/docker_processes_run.sh"] - -By saving this file as Dockerfile to a working directory, you can then build -your image with the docker build command, e.g., -`docker build -t managed_image`. - -### Testing the container - -Start the container with `apache2` and `sshd` running and managed, forwarding -a port to our SSH instance: - - $ sudo docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start" - -We now clearly see one of the benefits of the cfe-docker integration: it -allows to start several processes as part of a normal `docker run` command. - -We can now log in to our new container and see that both `apache2` and `sshd` -are running. We have set the root password to "password" in the Dockerfile -above and can use that to log in with ssh: - - ssh -p222 root@127.0.0.1 - - ps -ef - UID PID PPID C STIME TTY TIME CMD - root 1 0 0 07:48 ? 00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start - root 18 1 0 07:48 ? 00:00:00 /var/cfengine/bin/cf-execd -F - root 20 1 0 07:48 ? 00:00:00 /usr/sbin/sshd - root 32 1 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start - www-data 34 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start - www-data 35 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start - www-data 36 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start - root 93 20 0 07:48 ? 00:00:00 sshd: root@pts/0 - root 105 93 0 07:48 pts/0 00:00:00 -bash - root 112 105 0 07:49 pts/0 00:00:00 ps -ef - -If we stop apache2, it will be started again within a minute by -CFEngine. - - service apache2 status - Apache2 is running (pid 32). - service apache2 stop - * Stopping web server apache2 ... waiting [ OK ] - service apache2 status - Apache2 is NOT running. - # ... wait up to 1 minute... - service apache2 status - Apache2 is running (pid 173). 
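The restart behavior demonstrated above reduces to a basename check against the process table. A rough shell sketch of that logic, illustrative only and not the actual CFEngine policy:

```shell
# Approximation of the check CFEngine runs every minute: if no process
# with the command's basename is in the process table, re-run the command.
cmd="/path/to/my/application parameters"   # as given to the ENTRYPOINT
base=$(basename "${cmd%% *}")              # strip arguments, keep program name

if ! pgrep -x "$base" >/dev/null 2>&1; then
    echo "would restart: $cmd"             # CFEngine executes the command here
fi
```

This is why the started command must leave a process whose name matches its basename; a wrapper that forks under a different name would defeat the check.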
- -## Adapting to your applications - -To make sure your applications get managed in the same manner, there are -just two things you need to adjust from the above example: - - - In the Dockerfile used above, install your applications instead of - `apache2` and `sshd`. - - When you start the container with `docker run`, - specify the command line arguments to your applications rather than - `apache2` and `sshd`. diff --git a/articles/chef.md~ b/articles/chef.md~ deleted file mode 100644 index cb70215c54..0000000000 --- a/articles/chef.md~ +++ /dev/null @@ -1,74 +0,0 @@ -page_title: Chef Usage -page_description: Installation and using Docker via Chef -page_keywords: chef, installation, usage, docker, documentation - -# Using Chef - -> **Note**: -> Please note this is a community contributed installation path. The only -> `official` installation is using the -> [*Ubuntu*](/installation/ubuntulinux) installation -> path. This version may sometimes be out of date. - -## Requirements - -To use this guide you'll need a working installation of -[Chef](http://www.getchef.com/). This cookbook supports a variety of -operating systems. - -## Installation - -The cookbook is available on the [Chef Community -Site](http://community.opscode.com/cookbooks/docker) and can be -installed using your favorite cookbook dependency manager. - -The source can be found on -[GitHub](https://github.com/bflad/chef-docker). - -## Usage - -The cookbook provides recipes for installing Docker, configuring init -for Docker, and resources for managing images and containers. It -supports almost all Docker functionality. - -### Installation - - include_recipe 'docker' - -### Images - -The next step is to pull a Docker image. For this, we have a resource: - - docker_image 'samalba/docker-registry' - -This is equivalent to running: - - $ sudo docker pull samalba/docker-registry - -There are attributes available to control how long the cookbook will -allow for downloading (5 minute default). 
- -To remove images you no longer need: - - docker_image 'samalba/docker-registry' do - action :remove - end - -### Containers - -Now you have an image where you can run commands within a container -managed by Docker. - - docker_container 'samalba/docker-registry' do - detach true - port '5000:5000' - env 'SETTINGS_FLAVOR=local' - volume '/mnt/docker:/docker-storage' - end - -This is equivalent to running the following command, but under upstart: - - $ sudo docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry - -The resources will accept a single string or an array of values for any -Docker flags that allow multiple values. diff --git a/articles/dockerfile_best-practices.md~ b/articles/dockerfile_best-practices.md~ deleted file mode 100644 index 2ea796582d..0000000000 --- a/articles/dockerfile_best-practices.md~ +++ /dev/null @@ -1,429 +0,0 @@ -page_title: Best Practices for Writing Dockerfiles -page_description: Hints, tips and guidelines for writing clean, reliable Dockerfiles -page_keywords: Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub, official repo - -# Best practices for writing Dockerfiles - -## Overview - -Docker can build images automatically by reading the instructions from a -`Dockerfile`, a text file that contains all the commands, in order, needed to -build a given image. `Dockerfile`s adhere to a specific format and use a -specific set of instructions. You can learn the basics on the -[Dockerfile Reference](https://docs.docker.com/reference/builder/) page. If -you’re new to writing `Dockerfile`s, you should start there. - -This document covers the best practices and methods recommended by Docker, -Inc. and the Docker community for creating easy-to-use, effective -`Dockerfile`s. We strongly suggest you follow these recommendations (in fact, -if you’re creating an Official Image, you *must* adhere to these practices). 
- -You can see many of these practices and recommendations in action in the [buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile). - -> Note: for more detailed explanations of any of the Dockerfile commands -> mentioned here, visit the [Dockerfile Reference](https://docs.docker.com/reference/builder/) page. - -## General guidelines and recommendations - -### Containers should be ephemeral - -The container produced by the image your `Dockerfile` defines should be as -ephemeral as possible. By “ephemeral,” we mean that it can be stopped and -destroyed and a new one built and put in place with an absolute minimum of -set-up and configuration. - -### Use [a .dockerignore file](https://docs.docker.com/reference/builder/#the-dockerignore-file) - -For faster uploading and efficiency during `docker build`, you should use -a `.dockerignore` file to exclude files or directories from the build -context and final image. For example, unless `.git` is needed by your build -process or scripts, you should add it to `.dockerignore`, which can save many -megabytes worth of upload time. - -### Avoid installing unnecessary packages - -In order to reduce complexity, dependencies, file sizes, and build times, you -should avoid installing extra or unnecessary packages just because they -might be “nice to have.” For example, you don’t need to include a text editor -in a database image. - -### Run only one process per container - -In almost all cases, you should only run a single process in a single -container. Decoupling applications into multiple containers makes it much -easier to scale horizontally and reuse containers. If that service depends on -another service, make use of [container linking](https://docs.docker.com/userguide/dockerlinks/). - -### Minimize the number of layers - -You need to find the balance between readability (and thus long-term -maintainability) of the `Dockerfile` and minimizing the number of layers it -uses.
Be strategic and cautious about the number of layers you use. - -### Sort multi-line arguments - -Whenever possible, ease later changes by sorting multi-line arguments -alphanumerically. This will help you avoid duplication of packages and make the -list much easier to update. This also makes PRs a lot easier to read and -review. Adding a space before a backslash (`\`) helps as well. - -Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps): - - RUN apt-get update && apt-get install -y \ - bzr \ - cvs \ - git \ - mercurial \ - subversion - -### Build cache - -During the process of building an image Docker will step through the -instructions in your `Dockerfile` executing each in the order specified. -As each instruction is examined Docker will look for an existing image in its -cache that it can reuse, rather than creating a new (duplicate) image. -If you do not want to use the cache at all you can use the ` --no-cache=true` -option on the `docker build` command. - -However, if you do let Docker use its cache then it is very important to -understand when it will, and will not, find a matching image. The basic rules -that Docker will follow are outlined below: - -* Starting with a base image that is already in the cache, the next -instruction is compared against all child images derived from that base -image to see if one of them was built using the exact same instruction. If -not, the cache is invalidated. - -* In most cases simply comparing the instruction in the `Dockerfile` with one -of the child images is sufficient. However, certain instructions require -a little more examination and explanation. - -* In the case of the `ADD` and `COPY` instructions, the contents of the file(s) -being put into the image are examined. Specifically, a checksum is done -of the file(s) and then that checksum is used during the cache lookup. 
-If anything has changed in the file(s), including its metadata, -then the cache is invalidated. - -* Aside from the `ADD` and `COPY` commands, cache checking will not look at the -files in the container to determine a cache match. For example, when processing -a `RUN apt-get -y update` command, the files updated in the container -will not be examined to determine if a cache hit exists. In that case, just -the command string itself will be used to find a match. - -Once the cache is invalidated, all subsequent `Dockerfile` commands will -generate new images and the cache will not be used. - -## The Dockerfile instructions - -Below you'll find recommendations for the best way to write the -various instructions available for use in a `Dockerfile`. - -### [`FROM`](https://docs.docker.com/reference/builder/#from) - -Whenever possible, use current Official Repositories as the basis for your -image. We recommend the [Debian image](https://registry.hub.docker.com/_/debian/) -since it’s very tightly controlled and kept extremely minimal (currently under -100 MB), while still being a full distribution. - -### [`RUN`](https://docs.docker.com/reference/builder/#run) - -As always, to make your `Dockerfile` more readable, understandable, and -maintainable, put long or complex `RUN` statements on multiple lines separated -with backslashes. - -Probably the most common use-case for `RUN` is an application of `apt-get`. -When using `apt-get`, here are a few things to keep in mind: - -* Don’t do `RUN apt-get update` on a single line. This will cause -caching issues if the referenced archive gets updated, which will make your -subsequent `apt-get install` fail without comment. - -* Avoid `RUN apt-get upgrade` or `dist-upgrade`, since many of the “essential” -packages from the base images will fail to upgrade inside an unprivileged -container. If a base package is out of date, you should contact its -maintainers.
If you know there’s a particular package, `foo`, that needs to be -updated, use `apt-get install -y foo` and it will update automatically. - -* Do write instructions like: - - RUN apt-get update && apt-get install -y package-bar package-foo package-baz - -Writing the instruction this way not only makes it easier to read -and maintain, but also, by including `apt-get update`, ensures that the cache -will naturally be busted and the latest versions will be installed with no -further coding or manual intervention required. - -* Further natural cache-busting can be realized by version-pinning packages -(e.g., `package-foo=1.3.*`). This will force retrieval of that version -regardless of what’s in the cache. -Writing your `apt-get` code this way will greatly ease maintenance and reduce -failures due to unanticipated changes in required packages. - -#### Example - -Below is a well-formed `RUN` instruction that demonstrates the above -recommendations. Note that the last package, `s3cmd`, specifies a version -`1.1.0*`. If the image previously used an older version, specifying the new one -will cause a cache bust of `apt-get update` and ensure the installation of -the new version (which in this case had a new, required feature). - - RUN apt-get update && apt-get install -y \ - aufs-tools \ - automake \ - btrfs-tools \ - build-essential \ - curl \ - dpkg-sig \ - git \ - iptables \ - libapparmor-dev \ - libcap-dev \ - libsqlite3-dev \ - lxc=1.0* \ - mercurial \ - parallel \ - reprepro \ - ruby1.9.1 \ - ruby1.9.1-dev \ - s3cmd=1.1.0* - -Writing the instruction this way also helps you avoid potential duplication of -a given package because it is much easier to read than an instruction like: - - RUN apt-get install -y package-foo && apt-get install -y package-bar - -### [`CMD`](https://docs.docker.com/reference/builder/#cmd) - -The `CMD` instruction should be used to run the software contained by your -image, along with any arguments. 
`CMD` should almost always be used in the -form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a -service (Apache, Rails, etc.), you would run something like -`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is -recommended for any service-based image. - -In most other cases, `CMD` should be given an interactive shell (bash, python, -perl, etc.), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or -`CMD ["php", "-a"]`. Using this form means that when you execute something like -`docker run -it python`, you’ll get dropped into a usable shell, ready to go. -`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in -conjunction with [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint), unless -you and your expected users are already quite familiar with how `ENTRYPOINT` -works. - -### [`EXPOSE`](https://docs.docker.com/reference/builder/#expose) - -The `EXPOSE` instruction indicates the ports on which a container will listen -for connections. Consequently, you should use the common, traditional port for -your application. For example, an image containing the Apache web server would -use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017` and -so on. - -For external access, your users can execute `docker run` with a flag indicating -how to map the specified port to the port of their choice. -For container linking, Docker provides environment variables for the path from -the recipient container back to the source (i.e., `MYSQL_PORT_3306_TCP`). - -### [`ENV`](https://docs.docker.com/reference/builder/#env) - -In order to make new software easier to run, you can use `ENV` to update the -`PATH` environment variable for the software your container installs. For -example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]` -just works.
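Put together as a fragment (assuming, as in the example above, that nginx was installed under `/usr/local/nginx`):

```dockerfile
# Make the freshly installed binary resolvable, so the unqualified
# command name in CMD just works.
ENV PATH /usr/local/nginx/bin:$PATH
CMD ["nginx"]
```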
- -The `ENV` instruction is also useful for providing required environment -variables specific to services you wish to containerize, such as Postgres’s -`PGDATA`. - -Lastly, `ENV` can also be used to set commonly used version numbers so that -version bumps are easier to maintain, as seen in the following example: - - ENV PG_MAJOR 9.3 - ENV PG_VERSION 9.3.4 - RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && … - ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH - -Similar to having constant variables in a program (as opposed to hard-coding -values), this approach lets you change a single `ENV` instruction to -auto-magically bump the version of the software in your container. - -### [`ADD`](https://docs.docker.com/reference/builder/#add) or [`COPY`](https://docs.docker.com/reference/builder/#copy) - -Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY` -is preferred. That’s because it’s more transparent than `ADD`. `COPY` only -supports the basic copying of local files into the container, while `ADD` has -some features (like local-only tar extraction and remote URL support) that are -not immediately obvious. Consequently, the best use for `ADD` is local tar file -auto-extraction into the image, as in `ADD rootfs.tar.xz /`. - -If you have multiple `Dockerfile` steps that use different files from your -context, `COPY` them individually, rather than all at once. This will ensure that -each step's build cache is only invalidated (forcing the step to be re-run) if the -specifically required files change. - -For example: - - COPY requirements.txt /tmp/ - RUN pip install --requirement /tmp/requirements.txt - COPY . /tmp/ - -Results in fewer cache invalidations for the `RUN` step than if you put the -`COPY . /tmp/` before it. - -Because image size matters, using `ADD` to fetch packages from remote URLs is -strongly discouraged; you should use `curl` or `wget` instead.
That way you can -delete the files you no longer need after they've been extracted and you won't -have to add another layer in your image. For example, you should avoid doing -things like: - - ADD http://example.com/big.tar.xz /usr/src/things/ - RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things - RUN make -C /usr/src/things all - -And instead, do something like: - - RUN mkdir -p /usr/src/things \ - && curl -SL http://example.com/big.tar.xz \ - | tar -xJC /usr/src/things \ - && make -C /usr/src/things all - -For other items (files, directories) that do not require `ADD`’s tar -auto-extraction capability, you should always use `COPY`. - -### [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint) - -The best use for `ENTRYPOINT` is to set the image's main command, allowing that -image to be run as though it were that command (and then use `CMD` as the -default flags). - -Let's start with an example of an image for the command line tool `s3cmd`: - - ENTRYPOINT ["s3cmd"] - CMD ["--help"] - -Now the image can be run like this to show the command's help: - - $ docker run s3cmd - -Or using the right parameters to execute a command: - - $ docker run s3cmd ls s3://mybucket - -This is useful because the image name can double as a reference to the binary as -shown in the command above. - -The `ENTRYPOINT` instruction can also be used in combination with a helper -script, allowing it to function in a similar way to the command above, even -when starting the tool may require more than one step.
- -For example, the [Postgres Official Image](https://registry.hub.docker.com/_/postgres/) -uses the following script as its `ENTRYPOINT`: - -```bash -#!/bin/bash -set -e - -if [ "$1" = 'postgres' ]; then - chown -R postgres "$PGDATA" - - if [ -z "$(ls -A "$PGDATA")" ]; then - gosu postgres initdb - fi - - exec gosu postgres "$@" -fi - -exec "$@" -``` - -> **Note**: -> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec) -> so that the final running application becomes the container's PID 1. This allows -> the application to receive any Unix signals sent to the container. -> See the [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#ENTRYPOINT) -> help for more details. - - -The helper script is copied into the container and run via `ENTRYPOINT` on -container start: - - COPY ./docker-entrypoint.sh / - ENTRYPOINT ["/docker-entrypoint.sh"] - -This script allows the user to interact with Postgres in several ways. - -It can simply start Postgres: - - $ docker run postgres - -Or, it can be used to run Postgres and pass parameters to the server: - - $ docker run postgres postgres --help - -Lastly, it could also be used to start a totally different tool, such as Bash: - - $ docker run --rm -it postgres bash - -### [`VOLUME`](https://docs.docker.com/reference/builder/#volume) - -The `VOLUME` instruction should be used to expose any database storage area, -configuration storage, or files/folders created by your docker container. You -are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable -parts of your image. - -### [`USER`](https://docs.docker.com/reference/builder/#user) - -If a service can run without privileges, use `USER` to change to a non-root -user. Start by creating the user and group in the `Dockerfile` with something -like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.
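As a sketch of the full pattern (reusing the `postgres` user from the example; a real image would typically do this after installing the software):

```dockerfile
# Create an unprivileged system user and group, then drop privileges for
# all subsequent RUN instructions and for the running container.
RUN groupadd -r postgres && useradd -r -g postgres postgres
USER postgres
```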
- -> **Note:** Users and groups in an image get a non-deterministic -> UID/GID in that the “next” UID/GID gets assigned regardless of image -> rebuilds. So, if it’s critical, you should assign an explicit UID/GID. - -You should avoid installing or using `sudo` since it has unpredictable TTY and -signal-forwarding behavior that can cause more problems than it solves. If -you absolutely need functionality similar to `sudo` (e.g., initializing the -daemon as root but running it as non-root), you may be able to use -[“gosu”](https://github.com/tianon/gosu). - -Lastly, to reduce layers and complexity, avoid switching `USER` back -and forth frequently. - -### [`WORKDIR`](https://docs.docker.com/reference/builder/#workdir) - -For clarity and reliability, you should always use absolute paths for your -`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating -instructions like `RUN cd … && do-something`, which are hard to read, -troubleshoot, and maintain. - -### [`ONBUILD`](https://docs.docker.com/reference/builder/#onbuild) - -`ONBUILD` is only useful for images that are going to be built `FROM` a given -image. For example, you would use `ONBUILD` for a language stack image that -builds arbitrary user software written in that language within the -`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile). - -Images built from `ONBUILD` should get a separate tag, for example: -`ruby:1.9-onbuild` or `ruby:2.0-onbuild`. - -Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will -fail catastrophically if the new build's context is missing the resource being -added. Adding a separate tag, as recommended above, will help mitigate this by -allowing the `Dockerfile` author to make a choice. 
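A sketch of what such an `-onbuild` variant might contain (modeled loosely on the Ruby example linked above; the file names and version are illustrative):

```dockerfile
FROM ruby:2.1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# These instructions run in the downstream build, at the point where a
# child Dockerfile says `FROM ruby:2.1-onbuild`.
ONBUILD COPY Gemfile /usr/src/app/
ONBUILD RUN bundle install
ONBUILD COPY . /usr/src/app
```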
- -## Examples For Official Repositories - -These Official Repos have exemplary `Dockerfile`s: - -* [Go](https://registry.hub.docker.com/_/golang/) -* [Perl](https://registry.hub.docker.com/_/perl/) -* [Hy](https://registry.hub.docker.com/_/hylang/) -* [Rails](https://registry.hub.docker.com/_/rails) - -## Additional Resources: - -* [Dockerfile Reference](https://docs.docker.com/reference/builder/#onbuild) -* [More about Base Images](https://docs.docker.com/articles/baseimages/) -* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/) -* [Guidelines for Creating Official -Repositories](https://docs.docker.com/docker-hub/official_repos/) diff --git a/articles/dsc.md~ b/articles/dsc.md~ deleted file mode 100644 index 8d75b8f816..0000000000 --- a/articles/dsc.md~ +++ /dev/null @@ -1,167 +0,0 @@ -page_title: PowerShell DSC Usage -page_description: Using DSC to configure a new Docker host -page_keywords: powershell, dsc, installation, usage, docker, documentation - -# Using PowerShell DSC - -Windows PowerShell Desired State Configuration (DSC) is a configuration -management tool that extends the existing functionality of Windows PowerShell. -DSC uses a declarative syntax to define the state in which a target should be -configured. More information about PowerShell DSC can be found at -[http://technet.microsoft.com/en-us/library/dn249912.aspx](http://technet.microsoft.com/en-us/library/dn249912.aspx). - -## Requirements - -To use this guide you'll need a Windows host with PowerShell v4.0 or newer. - -The included DSC configuration script also uses the official PPA so -only an Ubuntu target is supported. The Ubuntu target must already have the -required OMI Server and PowerShell DSC for Linux providers installed. More -information can be found at [https://github.com/MSFTOSSMgmt/WPSDSCLinux](https://github.com/MSFTOSSMgmt/WPSDSCLinux). 
-The source repository listed below also includes PowerShell DSC for Linux -installation and init scripts along with more detailed installation information. - -## Installation - -The DSC configuration example source is available in the following repository: -[https://github.com/anweiss/DockerClientDSC](https://github.com/anweiss/DockerClientDSC). It can be cloned with: - - $ git clone https://github.com/anweiss/DockerClientDSC.git - -## Usage - -The DSC configuration utilizes a set of shell scripts to determine whether or -not the specified Docker components are configured on the target node(s). The -source repository also includes a script (`RunDockerClientConfig.ps1`) that can -be used to establish the required CIM session(s) and execute the -`Set-DscConfiguration` cmdlet. - -More detailed usage information can be found at -[https://github.com/anweiss/DockerClientDSC](https://github.com/anweiss/DockerClientDSC). - -### Install Docker -The Docker installation configuration is equivalent to running: - -``` -apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys\ -36A1D7869245C8950F966E92D8576A8BA88D21E9 -sh -c "echo deb https://get.docker.com/ubuntu docker main\ -> /etc/apt/sources.list.d/docker.list" -apt-get update -apt-get install lxc-docker -``` - -Ensure that your current working directory is set to the `DockerClientDSC` -source and load the DockerClient configuration into the current PowerShell -session - -```powershell -. 
.\DockerClient.ps1 -``` - -Generate the required DSC configuration .mof file for the targeted node - -```powershell -DockerClient -Hostname "myhost" -``` - -A sample DSC configuration data file has also been included and can be modified -and used in conjunction with or in place of the `Hostname` parameter: - -```powershell -DockerClient -ConfigurationData .\DockerConfigData.psd1 -``` - -Start the configuration application process on the targeted node - -```powershell -.\RunDockerClientConfig.ps1 -Hostname "myhost" -``` - -The `RunDockerClientConfig.ps1` script can also parse a DSC configuration data -file and execute configurations against multiple nodes as such: - -```powershell -.\RunDockerClientConfig.ps1 -ConfigurationData .\DockerConfigData.psd1 -``` - -### Images -Image configuration is equivalent to running: `docker pull [image]` or -`docker rmi -f [IMAGE]`. - -Using the same steps defined above, execute `DockerClient` with the `Image` -parameter and apply the configuration: - -```powershell -DockerClient -Hostname "myhost" -Image "node" -.\RunDockerClientConfig.ps1 -Hostname "myhost" -``` - -You can also configure the host to pull multiple images: - -```powershell -DockerClient -Hostname "myhost" -Image "node","mongo" -.\RunDockerClientConfig.ps1 -Hostname "myhost" -``` - -To remove images, use a hashtable as follows: - -```powershell -DockerClient -Hostname "myhost" -Image @{Name="node"; Remove=$true} -.\RunDockerClientConfig.ps1 -Hostname $hostname -``` - -### Containers -Container configuration is equivalent to running: - -``` -docker run -d --name="[containername]" -p '[port]' -e '[env]' --link '[link]'\ -'[image]' '[command]' -``` -or - -``` -docker rm -f [containername] -``` - -To create or remove containers, you can use the `Container` parameter with one -or more hashtables. 
The hashtable(s) passed to this parameter can have the -following properties: - -- Name (required) -- Image (required unless Remove property is set to `$true`) -- Port -- Env -- Link -- Command -- Remove - -For example, create a hashtable with the settings for your container: - -```powershell -$webContainer = @{Name="web"; Image="anweiss/docker-platynem"; Port="80:80"} -``` - -Then, using the same steps defined above, execute -`DockerClient` with the `-Image` and `-Container` parameters: - -```powershell -DockerClient -Hostname "myhost" -Image node -Container $webContainer -.\RunDockerClientConfig.ps1 -Hostname "myhost" -``` - -Existing containers can also be removed as follows: - -```powershell -$containerToRemove = @{Name="web"; Remove=$true} -DockerClient -Hostname "myhost" -Container $containerToRemove -.\RunDockerClientConfig.ps1 -Hostname "myhost" -``` - -Here is a hashtable with all of the properties that can be used to create a -container: - -```powershell -$containerProps = @{Name="web"; Image="node:latest"; Port="80:80"; ` -Env="PORT=80"; Link="db:db"; Command="grunt"} -``` diff --git a/articles/host_integration.md~ b/articles/host_integration.md~ deleted file mode 100644 index 89fd2a1f7a..0000000000 --- a/articles/host_integration.md~ +++ /dev/null @@ -1,76 +0,0 @@ -page_title: Automatically Start Containers -page_description: How to generate scripts for upstart, systemd, etc. -page_keywords: systemd, upstart, supervisor, docker, documentation, host integration - -# Automatically Start Containers - -As of Docker 1.2, -[restart policies](/reference/commandline/cli/#restart-policies) are the -built-in Docker mechanism for restarting containers when they exit. If set, -restart policies will be used when the Docker daemon starts up, as typically -happens after a system boot. Restart policies will ensure that linked containers -are started in the correct order. 
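For example, a restart policy is attached to a container when it is created (an illustrative command; the image and container name are placeholders):

```shell
# Restart this container whenever it exits, including when the Docker
# daemon itself comes back up after a system boot.
$ docker run -d --name redis_server --restart=always redis
```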
- -If restart policies don't suit your needs (i.e., you have non-Docker processes -that depend on Docker containers), you can use a process manager like -[upstart](http://upstart.ubuntu.com/), -[systemd](http://freedesktop.org/wiki/Software/systemd/) or -[supervisor](http://supervisord.org/) instead. - - -## Using a Process Manager - -Docker does not set any restart policies by default, but be aware that they will -conflict with most process managers. So don't set restart policies if you are -using a process manager. - -*Note:* Prior to Docker 1.2, restarting of Docker containers had to be -explicitly disabled. Refer to the -[previous version](/v1.1/articles/host_integration/) of this article for the -details on how to do that. - -When you have finished setting up your image and are happy with your -running container, you can then attach a process manager to manage it. -When you run `docker start -a`, Docker will automatically attach to the -running container, or start it if needed and forward all signals so that -the process manager can detect when a container stops and correctly -restart it. - -Here are a few sample scripts for systemd and upstart to integrate with -Docker. - - -## Examples - -The examples below show configuration files for two popular process managers, -upstart and systemd. In these examples, we'll assume that we have already -created a container to run Redis with `--name=redis_server`. These files define -a new service that will be started after the docker daemon service has started. 
- - -### upstart - - description "Redis container" - author "Me" - start on filesystem and started docker - stop on runlevel [!2345] - respawn - script - /usr/bin/docker start -a redis_server - end script - - -### systemd - - [Unit] - Description=Redis container - Author=Me - After=docker.service - - [Service] - Restart=always - ExecStart=/usr/bin/docker start -a redis_server - ExecStop=/usr/bin/docker stop -t 2 redis_server - - [Install] - WantedBy=multi-user.target - diff --git a/articles/https.md~ b/articles/https.md~ deleted file mode 100644 index 94d9ca3f22..0000000000 --- a/articles/https.md~ +++ /dev/null @@ -1,204 +0,0 @@ -page_title: Protecting the Docker daemon Socket with HTTPS -page_description: How to set up and run Docker with HTTPS -page_keywords: docker, docs, article, example, https, daemon, tls, ca, certificate - -# Protecting the Docker daemon Socket with HTTPS - -By default, Docker runs via a non-networked Unix socket. It can also -optionally communicate using an HTTP socket. - -If you need Docker to be reachable via the network in a safe manner, you can -enable TLS by specifying the `tlsverify` flag and pointing Docker's -`tlscacert` flag to a trusted CA certificate. - -In daemon mode, it will only allow connections from clients -authenticated by a certificate signed by that CA. In client mode, -it will only connect to servers with a certificate signed by that CA. - -> **Warning**: -> Using TLS and managing a CA is an advanced topic. Please familiarize yourself -> with OpenSSL, x509 and TLS before using it in production. - -> **Warning**: -> These TLS commands will only generate a working set of certificates on Linux. -> Mac OS X comes with a version of OpenSSL that is incompatible with the -> certificates that Docker requires. - -## Create a CA, server and client keys with OpenSSL - -> **Note**: replace all instances of `$HOST` in the following example with the -> DNS name of your Docker daemon's host.
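If you would rather script the CA step below than answer OpenSSL's interactive prompts, the subject and passphrase can be supplied on the command line (a sketch; `pass:changeme` and the CN are placeholders to replace with your own values):

```shell
#!/bin/sh
# Non-interactive CA creation: a passphrase-protected CA key and a
# self-signed CA certificate, with no prompts.
openssl genrsa -aes256 -passout pass:changeme -out ca-key.pem 2048
openssl req -new -x509 -days 365 -sha256 -subj "/CN=docker.example.com" \
    -key ca-key.pem -passin pass:changeme -out ca.pem
```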
- -First generate CA private and public keys: - - $ openssl genrsa -aes256 -out ca-key.pem 2048 - Generating RSA private key, 2048 bit long modulus - ......+++ - ...............+++ - e is 65537 (0x10001) - Enter pass phrase for ca-key.pem: - Verifying - Enter pass phrase for ca-key.pem: - $ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem - Enter pass phrase for ca-key.pem: - You are about to be asked to enter information that will be incorporated - into your certificate request. - What you are about to enter is what is called a Distinguished Name or a DN. - There are quite a few fields but you can leave some blank - For some fields there will be a default value, - If you enter '.', the field will be left blank. - ----- - Country Name (2 letter code) [AU]: - State or Province Name (full name) [Some-State]:Queensland - Locality Name (eg, city) []:Brisbane - Organization Name (eg, company) [Internet Widgits Pty Ltd]:Docker Inc - Organizational Unit Name (eg, section) []:Boot2Docker - Common Name (e.g. server FQDN or YOUR name) []:$HOST - Email Address []:Sven@home.org.au - -Now that we have a CA, you can create a server key and certificate -signing request (CSR). Make sure that "Common Name" (i.e., server FQDN or YOUR -name) matches the hostname you will use to connect to Docker: - -> **Note**: replace all instances of `$HOST` in the following example with the -> DNS name of your Docker daemon's host. - - $ openssl genrsa -out server-key.pem 2048 - Generating RSA private key, 2048 bit long modulus - ......................................................+++ - ............................................+++ - e is 65537 (0x10001) - $ openssl req -subj "/CN=$HOST" -new -key server-key.pem -out server.csr - -Next, we're going to sign the public key with our CA: - -Since TLS connections can be made via IP address as well as DNS name, they need -to be specified when creating the certificate. 
For example, to allow connections -using `10.10.10.20` and `127.0.0.1`: - - $ echo subjectAltName = IP:10.10.10.20,IP:127.0.0.1 > extfile.cnf - - $ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \ - -CAcreateserial -out server-cert.pem -extfile extfile.cnf - Signature ok - subject=/CN=your.host.com - Getting CA Private Key - Enter pass phrase for ca-key.pem: - -For client authentication, create a client key and certificate signing -request: - - $ openssl genrsa -out key.pem 2048 - Generating RSA private key, 2048 bit long modulus - ...............................................+++ - ...............................................................+++ - e is 65537 (0x10001) - $ openssl req -subj '/CN=client' -new -key key.pem -out client.csr - -To make the key suitable for client authentication, create an extensions -config file: - - $ echo extendedKeyUsage = clientAuth > extfile.cnf - -Now sign the public key: - - $ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \ - -CAcreateserial -out cert.pem -extfile extfile.cnf - Signature ok - subject=/CN=client - Getting CA Private Key - Enter pass phrase for ca-key.pem: - -After generating `cert.pem` and `server-cert.pem` you can safely remove the -two certificate signing requests: - - $ rm -v client.csr server.csr - -With a default `umask` of 022, your secret keys will be *world-readable* and -writable for you and your group. - -In order to protect your keys from accidental damage, you will want to remove their -write permissions. 
To make them only readable by you, change file modes as follows: - - $ chmod -v 0400 ca-key.pem key.pem server-key.pem - -Certificates can be world-readable, but you might want to remove write access to -prevent accidental damage: - - $ chmod -v 0444 ca.pem server-cert.pem cert.pem - -Now you can make the Docker daemon only accept connections from clients -providing a certificate trusted by our CA: - - $ docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \ - -H=0.0.0.0:2376 - -To be able to connect to Docker and validate its certificate, you now -need to provide your client keys, certificates and trusted CA: - -> **Note**: replace all instances of `$HOST` in the following example with the -> DNS name of your Docker daemon's host. - - $ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \ - -H=$HOST:2376 version - -> **Note**: -> Docker over TLS should run on TCP port 2376. - -> **Warning**: -> As shown in the example above, you don't have to run the `docker` client -> with `sudo` or the `docker` group when you use certificate authentication. -> That means anyone with the keys can give any instructions to your Docker -> daemon, giving them root access to the machine hosting the daemon. Guard -> these keys as you would a root password! - -## Secure by default - -If you want to secure your Docker client connections by default, you can move -the files to the `.docker` directory in your home directory -- and set the -`DOCKER_HOST` and `DOCKER_TLS_VERIFY` variables as well (instead of passing -`-H=tcp://$HOST:2376` and `--tlsverify` on every call). - - $ mkdir -pv ~/.docker - $ cp -v {ca,cert,key}.pem ~/.docker - $ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1 - -Docker will now connect securely by default: - - $ docker ps - -## Other modes - -If you don't want to have complete two-way authentication, you can run -Docker in various other modes by mixing the flags. 
- -### Daemon modes - - - `tlsverify`, `tlscacert`, `tlscert`, `tlskey` set: Authenticate clients - - `tls`, `tlscert`, `tlskey`: Do not authenticate clients - -### Client modes - - - `tls`: Authenticate server based on public/default CA pool - - `tlsverify`, `tlscacert`: Authenticate server based on given CA - - `tls`, `tlscert`, `tlskey`: Authenticate with client certificate, do not - authenticate server based on given CA - - `tlsverify`, `tlscacert`, `tlscert`, `tlskey`: Authenticate with client - certificate and authenticate server based on given CA - -If found, the client will send its client certificate, so you just need -to drop your keys into `~/.docker/{ca,cert,key}.pem`. Alternatively, -if you want to store your keys in another location, you can specify that -location using the environment variable `DOCKER_CERT_PATH`. - - $ export DOCKER_CERT_PATH=~/.docker/zone1/ - $ docker --tlsverify ps - -### Connecting to the Secure Docker port using `curl` - -To use `curl` to make test API requests, you need to use three extra command line -flags: - - $ curl https://$HOST:2376/images/json \ - --cert ~/.docker/cert.pem \ - --key ~/.docker/key.pem \ - --cacert ~/.docker/ca.pem diff --git a/articles/https/README.md~ b/articles/https/README.md~ deleted file mode 100644 index 3e1dd27f6e..0000000000 --- a/articles/https/README.md~ +++ /dev/null @@ -1,26 +0,0 @@ - - -This is an initial attempt to make it easier to test the examples in the https.md -doc - -at this point, it has to be a manual thing, and I've been running it in boot2docker - -so my process is - -$ boot2docker ssh -$$ git clone https://github.com/docker/docker -$$ cd docker/docs/sources/articles/https -$$ make cert -lots of things to see and manually answer, as openssl wants to be interactive -**NOTE:** make sure you enter the hostname (`boot2docker` in my case) when prompted for `Computer Name`) -$$ sudo make run - -start another terminal - -$ boot2docker ssh -$$ cd docker/docs/sources/articles/https -$$ 
make client - -the last will connect first with `--tls` and then with `--tlsverify` - -both should succeed diff --git a/articles/networking.md~ b/articles/networking.md~ deleted file mode 100644 index 5c3e885e17..0000000000 --- a/articles/networking.md~ +++ /dev/null @@ -1,996 +0,0 @@ -page_title: Network Configuration -page_description: Docker networking -page_keywords: network, networking, bridge, docker, documentation - -# Network Configuration - -## TL;DR - -When Docker starts, it creates a virtual interface named `docker0` on -the host machine. It randomly chooses an address and subnet from the -private range defined by [RFC 1918](http://tools.ietf.org/html/rfc1918) -that are not in use on the host machine, and assigns it to `docker0`. -Docker made the choice `172.17.42.1/16` when I started it a few minutes -ago, for example — a 16-bit netmask providing 65,534 addresses for the -host machine and its containers. The MAC address is generated using the -IP address allocated to the container to avoid ARP collisions, using a -range from `02:42:ac:11:00:00` to `02:42:ac:11:ff:ff`. - -> **Note:** -> This document discusses advanced networking configuration -> and options for Docker. In most cases you won't need this information. -> If you're looking to get started with a simpler explanation of Docker -> networking and an introduction to the concept of container linking see -> the [Docker User Guide](/userguide/dockerlinks/). - -But `docker0` is no ordinary interface. It is a virtual *Ethernet -bridge* that automatically forwards packets between any other network -interfaces that are attached to it. This lets containers communicate -both with the host machine and with each other. Every time Docker -creates a container, it creates a pair of “peer” interfaces that are -like opposite ends of a pipe — a packet sent on one will be received on -the other. 
It gives one of the peers to the container to become its -`eth0` interface and keeps the other peer, with a unique name like -`vethAQI2QT`, out in the namespace of the host machine. By binding -every `veth*` interface to the `docker0` bridge, Docker creates a -virtual subnet shared between the host machine and every Docker -container. - -The remaining sections of this document explain all of the ways that you -can use Docker options and — in advanced cases — raw Linux networking -commands to tweak, supplement, or entirely replace Docker's default -networking configuration. - -## Quick Guide to the Options - -Here is a quick list of the networking-related Docker command-line -options, in case it helps you find the section below that you are -looking for. - -Some networking command-line options can only be supplied to the Docker -server when it starts up, and cannot be changed once it is running: - - * `-b BRIDGE` or `--bridge=BRIDGE` — see - [Building your own bridge](#bridge-building) - - * `--bip=CIDR` — see - [Customizing docker0](#docker0) - - * `--fixed-cidr` — see - [Customizing docker0](#docker0) - - * `--fixed-cidr-v6` — see - [IPv6](#ipv6) - - * `-H SOCKET...` or `--host=SOCKET...` — - This might sound like it would affect container networking, - but it actually faces in the other direction: - it tells the Docker server over what channels - it should be willing to receive commands - like “run container” and “stop container.” - - * `--icc=true|false` — see - [Communication between containers](#between-containers) - - * `--ip=IP_ADDRESS` — see - [Binding container ports](#binding-ports) - - * `--ipv6=true|false` — see - [IPv6](#ipv6) - - * `--ip-forward=true|false` — see - [Communication between containers and the wider world](#the-world) - - * `--iptables=true|false` — see - [Communication between containers](#between-containers) - - * `--mtu=BYTES` — see - [Customizing docker0](#docker0) - -There are two networking options that can be supplied either at 
startup
-or when `docker run` is invoked. When provided at startup, they set the
-default value that `docker run` will later use if the options are not
-specified:
-
- * `--dns=IP_ADDRESS...` — see
-   [Configuring DNS](#dns)
-
- * `--dns-search=DOMAIN...` — see
-   [Configuring DNS](#dns)
-
-Finally, several networking options can only be provided when calling
-`docker run` because they specify something specific to one container:
-
- * `-h HOSTNAME` or `--hostname=HOSTNAME` — see
-   [Configuring DNS](#dns) and
-   [How Docker networks a container](#container-networking)
-
- * `--link=CONTAINER_NAME_or_ID:ALIAS` — see
-   [Configuring DNS](#dns) and
-   [Communication between containers](#between-containers)
-
- * `--net=bridge|none|container:NAME_or_ID|host` — see
-   [How Docker networks a container](#container-networking)
-
- * `--mac-address=MACADDRESS...` — see
-   [How Docker networks a container](#container-networking)
-
- * `-p SPEC` or `--publish=SPEC` — see
-   [Binding container ports](#binding-ports)
-
- * `-P` or `--publish-all=true|false` — see
-   [Binding container ports](#binding-ports)
-
-The following sections tackle all of the above topics in an order that
-moves roughly from simplest to most complex.
-
-## Configuring DNS
-
-<a name="dns"></a>
-
-How can Docker supply each container with a hostname and DNS
-configuration, without having to build a custom image with the hostname
-written inside? Its trick is to overlay three crucial `/etc` files
-inside the container with virtual files where it can write fresh
-information. You can see this by running `mount` inside a container:
-
-    $$ mount
-    ...
-    /dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
-    /dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
-    /dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...
-    ...
-
-This arrangement allows Docker to do clever things like keep
-`resolv.conf` up to date across all containers when the host machine
-receives new configuration over DHCP later.
The exact details of how
-Docker maintains these files inside the container can change from one
-Docker version to the next, so you should leave the files themselves
-alone and use the following Docker options instead.
-
-Four different options affect container domain name services.
-
- * `-h HOSTNAME` or `--hostname=HOSTNAME` — sets the hostname by which
-   the container knows itself. This is written into `/etc/hostname`,
-   into `/etc/hosts` as the name of the container's host-facing IP
-   address, and is the name that `/bin/bash` inside the container will
-   display inside its prompt. But the hostname is not easy to see from
-   outside the container. It will not appear in `docker ps` nor in the
-   `/etc/hosts` file of any other container.
-
- * `--link=CONTAINER_NAME_or_ID:ALIAS` — using this option as you `run` a
-   container gives the new container's `/etc/hosts` an extra entry
-   named `ALIAS` that points to the IP address of the container identified by
-   `CONTAINER_NAME_or_ID`. This lets processes inside the new container
-   connect to the hostname `ALIAS` without having to know its IP. The
-   `--link=` option is discussed in more detail below, in the section
-   [Communication between containers](#between-containers). Because
-   Docker may assign a different IP address to the linked containers
-   on restart, Docker updates the `ALIAS` entry in the `/etc/hosts` file
-   of the recipient containers.
-
- * `--dns=IP_ADDRESS...` — sets the IP addresses added as `nameserver`
-   lines to the container's `/etc/resolv.conf` file. Processes in the
-   container, when confronted with a hostname not in `/etc/hosts`, will
-   connect to these IP addresses on port 53 looking for name resolution
-   services.
-
- * `--dns-search=DOMAIN...` — sets the domain names that are searched
-   when a bare unqualified hostname is used inside of the container, by
-   writing `search` lines into the container's `/etc/resolv.conf`.
- When a container process attempts to access `host` and the search - domain `example.com` is set, for instance, the DNS logic will not - only look up `host` but also `host.example.com`. - Use `--dns-search=.` if you don't wish to set the search domain. - -Note that Docker, in the absence of either of the last two options -above, will make `/etc/resolv.conf` inside of each container look like -the `/etc/resolv.conf` of the host machine where the `docker` daemon is -running. You might wonder what happens when the host machine's -`/etc/resolv.conf` file changes. The `docker` daemon has a file change -notifier active which will watch for changes to the host DNS configuration. -When the host file changes, all stopped containers which have a matching -`resolv.conf` to the host will be updated immediately to this newest host -configuration. Containers which are running when the host configuration -changes will need to stop and start to pick up the host changes due to lack -of a facility to ensure atomic writes of the `resolv.conf` file while the -container is running. If the container's `resolv.conf` has been edited since -it was started with the default configuration, no replacement will be -attempted as it would overwrite the changes performed by the container. -If the options (`--dns` or `--dns-search`) have been used to modify the -default host configuration, then the replacement with an updated host's -`/etc/resolv.conf` will not happen as well. - -> **Note**: -> For containers which were created prior to the implementation of -> the `/etc/resolv.conf` update feature in Docker 1.5.0: those -> containers will **not** receive updates when the host `resolv.conf` -> file changes. Only containers created with Docker 1.5.0 and above -> will utilize this auto-update feature. - -## Communication between containers and the wider world - - - -Whether a container can talk to the world is governed by two factors. - -1. Is the host machine willing to forward IP packets? 
This is governed
-   by the `ip_forward` system parameter. Packets can only pass between
-   containers if this parameter is `1`. Usually you will simply leave
-   the Docker server at its default setting `--ip-forward=true` and
-   Docker will set `ip_forward` to `1` for you when the server
-   starts up. To check the setting or turn it on manually:
-
-   ```
-   $ cat /proc/sys/net/ipv4/ip_forward
-   0
-   $ echo 1 > /proc/sys/net/ipv4/ip_forward
-   $ cat /proc/sys/net/ipv4/ip_forward
-   1
-   ```
-
-   Most Docker users will want `ip_forward` to be on, to at
-   least make communication *possible* between containers and
-   the wider world.
-
-   It may also be needed for inter-container communication if you are
-   in a multiple bridge setup.
-
-2. Do your `iptables` allow this particular connection? Docker will
-   never make changes to your system `iptables` rules if you set
-   `--iptables=false` when the daemon starts. Otherwise the Docker
-   server will append forwarding rules to the `DOCKER` filter chain.
-
-Docker will not delete or modify any pre-existing rules from the `DOCKER`
-filter chain. This allows the user to create in advance any rules required
-to further restrict access to the containers.
-
-Docker's forward rules permit all external source IPs by default. To allow
-only a specific IP or network to access the containers, insert a negated
-rule at the top of the `DOCKER` filter chain. For example, to restrict
-external access such that *only* source IP 8.8.8.8 can access the
-containers, the following rule could be added:
-
-    $ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
-
-## Communication between containers
-
-<a name="between-containers"></a>
-
-Whether two containers can communicate is governed, at the operating
-system level, by two factors.
-
-1. Does the network topology even connect the containers' network
-   interfaces? By default Docker will attach all containers to a
-   single `docker0` bridge, providing a path for packets to travel
-   between them.
See the later sections of this document for other - possible topologies. - -2. Do your `iptables` allow this particular connection? Docker will never - make changes to your system `iptables` rules if you set - `--iptables=false` when the daemon starts. Otherwise the Docker server - will add a default rule to the `FORWARD` chain with a blanket `ACCEPT` - policy if you retain the default `--icc=true`, or else will set the - policy to `DROP` if `--icc=false`. - -It is a strategic question whether to leave `--icc=true` or change it to -`--icc=false` (on Ubuntu, by editing the `DOCKER_OPTS` variable in -`/etc/default/docker` and restarting the Docker server) so that -`iptables` will protect other containers — and the main host — from -having arbitrary ports probed or accessed by a container that gets -compromised. - -If you choose the most secure setting of `--icc=false`, then how can -containers communicate in those cases where you *want* them to provide -each other services? - -The answer is the `--link=CONTAINER_NAME_or_ID:ALIAS` option, which was -mentioned in the previous section because of its effect upon name -services. If the Docker daemon is running with both `--icc=false` and -`--iptables=true` then, when it sees `docker run` invoked with the -`--link=` option, the Docker server will insert a pair of `iptables` -`ACCEPT` rules so that the new container can connect to the ports -exposed by the other container — the ports that it mentioned in the -`EXPOSE` lines of its `Dockerfile`. Docker has more documentation on -this subject — see the [linking Docker containers](/userguide/dockerlinks) -page for further details. - -> **Note**: -> The value `CONTAINER_NAME` in `--link=` must either be an -> auto-assigned Docker name like `stupefied_pare` or else the name you -> assigned with `--name=` when you ran `docker run`. It cannot be a -> hostname, which Docker will not recognize in the context of the -> `--link=` option. 
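Alongside the `iptables` rules and the `/etc/hosts` entry, `--link` also injects environment variables describing each exposed port (the `REDIS_PORT_6379_TCP`-style variables shown in the ambassador example). As a hedged sketch, a process in the consuming container might read one like this; the address below is illustrative, since inside a real container Docker sets the variable for you:

```shell
# Hedged sketch: consuming a --link environment variable of the form
# <ALIAS>_PORT_<port>_TCP=tcp://<addr>:<port>. The value is a placeholder.
REDIS_PORT_6379_TCP="tcp://172.17.0.2:6379"

hostport=${REDIS_PORT_6379_TCP#*://}   # strip the "tcp://" scheme
addr=${hostport%:*}                    # address part
port=${hostport##*:}                   # port part

echo "connect to $addr port $port"
```

This is one reason linking works without hardcoding IPs: the consumer reads the alias's variables at startup rather than baking an address into its image.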
- -You can run the `iptables` command on your Docker host to see whether -the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`: - - # When --icc=false, you should see a DROP rule: - - $ sudo iptables -L -n - ... - Chain FORWARD (policy ACCEPT) - target prot opt source destination - DOCKER all -- 0.0.0.0/0 0.0.0.0/0 - DROP all -- 0.0.0.0/0 0.0.0.0/0 - ... - - # When a --link= has been created under --icc=false, - # you should see port-specific ACCEPT rules overriding - # the subsequent DROP policy for all other packets: - - $ sudo iptables -L -n - ... - Chain FORWARD (policy ACCEPT) - target prot opt source destination - DOCKER all -- 0.0.0.0/0 0.0.0.0/0 - DROP all -- 0.0.0.0/0 0.0.0.0/0 - - Chain DOCKER (1 references) - target prot opt source destination - ACCEPT tcp -- 172.17.0.2 172.17.0.3 tcp spt:80 - ACCEPT tcp -- 172.17.0.3 172.17.0.2 tcp dpt:80 - -> **Note**: -> Docker is careful that its host-wide `iptables` rules fully expose -> containers to each other's raw IP addresses, so connections from one -> container to another should always appear to be originating from the -> first container's own IP address. - -## Binding container ports to the host - - - -By default Docker containers can make connections to the outside world, -but the outside world cannot connect to containers. Each outgoing -connection will appear to originate from one of the host machine's own -IP addresses thanks to an `iptables` masquerading rule on the host -machine that the Docker server creates when it starts: - - # You can see that the Docker server creates a - # masquerade rule that let containers connect - # to IP addresses in the outside world: - - $ sudo iptables -t nat -L -n - ... - Chain POSTROUTING (policy ACCEPT) - target prot opt source destination - MASQUERADE all -- 172.17.0.0/16 !172.17.0.0/16 - ... - -But if you want containers to accept incoming connections, you will need -to provide special options when invoking `docker run`. 
These options -are covered in more detail in the [Docker User Guide](/userguide/dockerlinks) -page. There are two approaches. - -First, you can supply `-P` or `--publish-all=true|false` to `docker run` -which is a blanket operation that identifies every port with an `EXPOSE` -line in the image's `Dockerfile` and maps it to a host port somewhere in -the range 49153–65535. This tends to be a bit inconvenient, since you -then have to run other `docker` sub-commands to learn which external -port a given service was mapped to. - -More convenient is the `-p SPEC` or `--publish=SPEC` option which lets -you be explicit about exactly which external port on the Docker server — -which can be any port at all, not just those in the 49153-65535 block — -you want mapped to which port in the container. - -Either way, you should be able to peek at what Docker has accomplished -in your network stack by examining your NAT tables. - - # What your NAT rules might look like when Docker - # is finished setting up a -P forward: - - $ iptables -t nat -L -n - ... - Chain DOCKER (2 references) - target prot opt source destination - DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:49153 to:172.17.0.2:80 - - # What your NAT rules might look like when Docker - # is finished setting up a -p 80:80 forward: - - Chain DOCKER (2 references) - target prot opt source destination - DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 - -You can see that Docker has exposed these container ports on `0.0.0.0`, -the wildcard IP address that will match any possible incoming port on -the host machine. If you want to be more restrictive and only allow -container services to be contacted through a specific external interface -on the host machine, you have two choices. When you invoke `docker run` -you can use either `-p IP:host_port:container_port` or `-p IP::port` to -specify the external interface for one particular binding. 
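Both flags share one `SPEC` grammar: `ip:hostPort:containerPort`, `ip::containerPort`, `hostPort:containerPort`, or just `containerPort`. As a hedged illustration of how those forms decompose (this toy parser is illustrative, not Docker's own code):

```shell
# Toy sketch of how a -p SPEC splits into fields; mimics, but is not,
# Docker's parser. An empty host port means "let Docker pick one";
# an empty IP means "bind on all interfaces".
parse_publish_spec() {
    spec=$1
    case $spec in
        *:*:*) ip=${spec%%:*}; rest=${spec#*:}
               host_port=${rest%%:*}; container_port=${rest#*:} ;;
        *:*)   ip=""; host_port=${spec%%:*}; container_port=${spec#*:} ;;
        *)     ip=""; host_port=""; container_port=$spec ;;
    esac
    echo "ip=${ip:-any} host=${host_port:-auto} container=$container_port"
}

parse_publish_spec 127.0.0.1:8080:80   # one specific external interface
parse_publish_spec 127.0.0.1::80       # specific interface, host port auto
parse_publish_spec 8080:80             # all interfaces
parse_publish_spec 80                  # all interfaces, host port auto
```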
- -Or if you always want Docker port forwards to bind to one specific IP -address, you can edit your system-wide Docker server settings (on -Ubuntu, by editing `DOCKER_OPTS` in `/etc/default/docker`) and add the -option `--ip=IP_ADDRESS`. Remember to restart your Docker server after -editing this setting. - -Again, this topic is covered without all of these low-level networking -details in the [Docker User Guide](/userguide/dockerlinks/) document if you -would like to use that as your port redirection reference instead. - -## IPv6 - - - -As we are [running out of IPv4 addresses](http://en.wikipedia.org/wiki/IPv4_address_exhaustion) -the IETF has standardized an IPv4 successor, [Internet Protocol Version 6](http://en.wikipedia.org/wiki/IPv6) -, in [RFC 2460](https://www.ietf.org/rfc/rfc2460.txt). Both protocols, IPv4 and -IPv6, reside on layer 3 of the [OSI model](http://en.wikipedia.org/wiki/OSI_model). - - -### IPv6 with Docker -By default, the Docker server configures the container network for IPv4 only. -You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the -`--ipv6` flag. Docker will set up the bridge `docker0` with the IPv6 -[link-local address](http://en.wikipedia.org/wiki/Link-local_address) `fe80::1`. - -By default, containers that are created will only get a link-local IPv6 address. -To assign globally routable IPv6 addresses to your containers you have to -specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the -`--fixed-cidr-v6` parameter when starting Docker daemon: - - docker -d --ipv6 --fixed-cidr-v6="2001:db8:1::/64" - -The subnet for Docker containers should at least have a size of `/80`. This way -an IPv6 address can end with the container's MAC address and you prevent NDP -neighbor cache invalidation issues in the Docker layer. - -With the `--fixed-cidr-v6` parameter set Docker will add a new route to the -routing table. 
Further IPv6 routing will be enabled (you may prevent this by -starting Docker daemon with `--ip-forward=false`): - - $ ip -6 route add 2001:db8:1::/64 dev docker0 - $ sysctl net.ipv6.conf.default.forwarding=1 - $ sysctl net.ipv6.conf.all.forwarding=1 - -All traffic to the subnet `2001:db8:1::/64` will now be routed -via the `docker0` interface. - -Be aware that IPv6 forwarding may interfere with your existing IPv6 -configuration: If you are using Router Advertisements to get IPv6 settings for -your host's interfaces you should set `accept_ra` to `2`. Otherwise IPv6 -enabled forwarding will result in rejecting Router Advertisements. E.g., if you -want to configure `eth0` via Router Advertisements you should set: - - ``` - $ sysctl net.ipv6.conf.eth0.accept_ra=2 - ``` - -![](/article-img/ipv6_basic_host_config.svg) - -Every new container will get an IPv6 address from the defined subnet. Further -a default route will be added via the gateway `fe80::1` on `eth0`: - - docker run -it ubuntu bash -c "ip -6 addr show dev eth0; ip -6 route show" - - 15: eth0: mtu 1500 - inet6 2001:db8:1:0:0:242:ac11:3/64 scope global - valid_lft forever preferred_lft forever - inet6 fe80::42:acff:fe11:3/64 scope link - valid_lft forever preferred_lft forever - - 2001:db8:1::/64 dev eth0 proto kernel metric 256 - fe80::/64 dev eth0 proto kernel metric 256 - default via fe80::1 dev eth0 metric 1024 - -In this example the Docker container is assigned a link-local address with the -network suffix `/64` (here: `fe80::42:acff:fe11:3/64`) and a globally routable -IPv6 address (here: `2001:db8:1:0:0:242:ac11:3/64`). The container will create -connections to addresses outside of the `2001:db8:1::/64` network via the -link-local gateway at `fe80::1` on `eth0`. - -Often servers or virtual machines get a `/64` IPv6 subnet assigned (e.g. -`2001:db8:23:42::/64`). 
In this case you can split it up further and provide -Docker a `/80` subnet while using a separate `/80` subnet for other -applications on the host: - -![](/article-img/ipv6_slash64_subnet_config.svg) - -In this setup the subnet `2001:db8:23:42::/80` with a range from `2001:db8:23:42:0:0:0:0` -to `2001:db8:23:42:0:ffff:ffff:ffff` is attached to `eth0`, with the host listening -at `2001:db8:23:42::1`. The subnet `2001:db8:23:42:1::/80` with an address range from -`2001:db8:23:42:1:0:0:0` to `2001:db8:23:42:1:ffff:ffff:ffff` is attached to -`docker0` and will be used by containers. - -### Docker IPv6 Cluster - -#### Switched Network Environment -Using routable IPv6 addresses allows you to realize communication between -containers on different hosts. Let's have a look at a simple Docker IPv6 cluster -example: - -![](/article-img/ipv6_switched_network_example.svg) - -The Docker hosts are in the `2001:db8:0::/64` subnet. Host1 is configured -to provide addresses from the `2001:db8:1::/64` subnet to its containers. It -has three routes configured: - -- Route all traffic to `2001:db8:0::/64` via `eth0` -- Route all traffic to `2001:db8:1::/64` via `docker0` -- Route all traffic to `2001:db8:2::/64` via Host2 with IP `2001:db8::2` - -Host1 also acts as a router on OSI layer 3. When one of the network clients -tries to contact a target that is specified in Host1's routing table Host1 will -forward the traffic accordingly. It acts as a router for all networks it knows: -`2001:db8::/64`, `2001:db8:1::/64` and `2001:db8:2::/64`. - -On Host2 we have nearly the same configuration. Host2's containers will get -IPv6 addresses from `2001:db8:2::/64`. 
Host2 has three routes configured: - -- Route all traffic to `2001:db8:0::/64` via `eth0` -- Route all traffic to `2001:db8:2::/64` via `docker0` -- Route all traffic to `2001:db8:1::/64` via Host1 with IP `2001:db8:0::1` - -The difference to Host1 is that the network `2001:db8:2::/64` is directly -attached to the host via its `docker0` interface whereas it reaches -`2001:db8:1::/64` via Host1's IPv6 address `2001:db8::1`. - -This way every container is able to contact every other container. The -containers `Container1-*` share the same subnet and contact each other directly. -The traffic between `Container1-*` and `Container2-*` will be routed via Host1 -and Host2 because those containers do not share the same subnet. - -In a switched environment every host has to know all routes to every subnet. You -always have to update the hosts' routing tables once you add or remove a host -to the cluster. - -Every configuration in the diagram that is shown below the dashed line is -handled by Docker: The `docker0` bridge IP address configuration, the route to -the Docker subnet on the host, the container IP addresses and the routes on the -containers. The configuration above the line is up to the user and can be -adapted to the individual environment. - -#### Routed Network Environment - -In a routed network environment you replace the level 2 switch with a level 3 -router. Now the hosts just have to know their default gateway (the router) and -the route to their own containers (managed by Docker). The router holds all -routing information about the Docker subnets. When you add or remove a host to -this environment you just have to update the routing table in the router - not -on every host. - -![](/article-img/ipv6_routed_network_example.svg) - -In this scenario containers of the same host can communicate directly with each -other. The traffic between containers on different hosts will be routed via -their hosts and the router. 
For example, a packet from `Container1-1` to
-`Container2-1` will be routed through `Host1`, `Router` and `Host2` until it
-arrives at `Container2-1`.
-
-To keep the IPv6 addresses short in this example a `/48` network is assigned to
-every host. Each host uses a `/64` subnet of this for its own services and one
-for Docker. When adding a third host you would add a route for the subnet
-`2001:db8:3::/48` in the router and configure Docker on Host3 with
-`--fixed-cidr-v6=2001:db8:3:1::/64`.
-
-Remember that the subnet for Docker containers should at least have a size of
-`/80`. This way an IPv6 address can end with the container's MAC address and you
-prevent NDP neighbor cache invalidation issues in the Docker layer. So if you
-have a `/64` for your whole environment use `/68` subnets for the hosts and
-`/80` for the containers. This way you can use 16 hosts with 4,096 `/80`
-subnets each.
-
-Every configuration in the diagram that is visualized below the dashed line is
-handled by Docker: the `docker0` bridge IP address configuration, the route to
-the Docker subnet on the host, the container IP addresses and the routes on the
-containers. The configuration above the line is up to the user and can be
-adapted to the individual environment.
-
-## Customizing docker0
-
-<a name="docker0"></a>
-
-By default, the Docker server creates and configures the host system's
-`docker0` interface as an *Ethernet bridge* inside the Linux kernel that
-can pass packets back and forth between other physical or virtual
-network interfaces so that they behave as a single Ethernet network.
-
-Docker configures `docker0` with an IP address, netmask and IP
-allocation range. The host machine can both receive and send packets to
-containers connected to the bridge, and gives it an MTU — the *maximum
-transmission unit* or largest packet length that the interface will
-allow — of either 1,500 bytes or else a more specific value copied from
-the Docker host's interface that supports its default route.
These -options are configurable at server startup: - - * `--bip=CIDR` — supply a specific IP address and netmask for the - `docker0` bridge, using standard CIDR notation like - `192.168.1.5/24`. - - * `--fixed-cidr=CIDR` — restrict the IP range from the `docker0` subnet, - using the standard CIDR notation like `172.167.1.0/28`. This range must - be and IPv4 range for fixed IPs (ex: 10.20.0.0/16) and must be a subset - of the bridge IP range (`docker0` or set using `--bridge`). For example - with `--fixed-cidr=192.168.1.0/25`, IPs for your containers will be chosen - from the first half of `192.168.1.0/24` subnet. - - * `--mtu=BYTES` — override the maximum packet length on `docker0`. - -On Ubuntu you would add these to the `DOCKER_OPTS` setting in -`/etc/default/docker` on your Docker host and restarting the Docker -service. - -Once you have one or more containers up and running, you can confirm -that Docker has properly connected them to the `docker0` bridge by -running the `brctl` command on the host machine and looking at the -`interfaces` column of the output. Here is a host with two different -containers connected: - - # Display bridge info - - $ sudo brctl show - bridge name bridge id STP enabled interfaces - docker0 8000.3a1d7362b4ee no veth65f9 - vethdda6 - -If the `brctl` command is not installed on your Docker host, then on -Ubuntu you should be able to run `sudo apt-get install bridge-utils` to -install it. - -Finally, the `docker0` Ethernet bridge settings are used every time you -create a new container. Docker selects a free IP address from the range -available on the bridge each time you `docker run` a new container, and -configures the container's `eth0` interface with that IP address and the -bridge's netmask. The Docker host's own IP address on the bridge is -used as the default gateway by which each container reaches the rest of -the Internet. 
- - # The network, as seen from a container - - $ sudo docker run -i -t --rm base /bin/bash - - $$ ip addr show eth0 - 24: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 - link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff - inet 172.17.0.3/16 scope global eth0 - valid_lft forever preferred_lft forever - inet6 fe80::306f:e0ff:fe35:5791/64 scope link - valid_lft forever preferred_lft forever - - $$ ip route - default via 172.17.42.1 dev eth0 - 172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3 - - $$ exit - -Remember that the Docker host will not be willing to forward container -packets out on to the Internet unless its `ip_forward` system setting is -`1` — see the section above on [Communication between -containers](#between-containers) for details. - -## Building your own bridge - - - -If you want to take Docker out of the business of creating its own -Ethernet bridge entirely, you can set up your own bridge before starting -Docker and use `-b BRIDGE` or `--bridge=BRIDGE` to tell Docker to use -your bridge instead. If you already have Docker up and running with its -old `docker0` still configured, you will probably want to begin by -stopping the service and removing the interface: - - # Stopping Docker and removing docker0 - - $ sudo service docker stop - $ sudo ip link set dev docker0 down - $ sudo brctl delbr docker0 - $ sudo iptables -t nat -F POSTROUTING - -Then, before starting the Docker service, create your own bridge and -give it whatever configuration you want. Here we will create a simple -enough bridge that we really could just have used the options in the -previous section to customize `docker0`, but it will be enough to -illustrate the technique. 
-
-    # Create our own bridge
-
-    $ sudo brctl addbr bridge0
-    $ sudo ip addr add 192.168.5.1/24 dev bridge0
-    $ sudo ip link set dev bridge0 up
-
-    # Confirming that our bridge is up and running
-
-    $ ip addr show bridge0
-    4: bridge0: mtu 1500 qdisc noop state UP group default
-        link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
-        inet 192.168.5.1/24 scope global bridge0
-           valid_lft forever preferred_lft forever
-
-    # Tell Docker about it and restart (on Ubuntu)
-
-    $ echo 'DOCKER_OPTS="-b=bridge0"' | sudo tee -a /etc/default/docker
-    $ sudo service docker start
-
-    # Confirming new outgoing NAT masquerade is set up
-
-    $ sudo iptables -t nat -L -n
-    ...
-    Chain POSTROUTING (policy ACCEPT)
-    target     prot opt source               destination
-    MASQUERADE  all  --  192.168.5.0/24      0.0.0.0/0
-
-
-The result should be that the Docker server starts successfully and is
-now prepared to bind containers to the new bridge. After pausing to
-verify the bridge's configuration, try creating a container — you will
-see that its IP address is in your new IP address range, which Docker
-will have auto-detected.
-
-Just as we learned in the previous section, you can use the `brctl show`
-command to see Docker add and remove interfaces from the bridge as you
-start and stop containers, and can run `ip addr` and `ip route` inside a
-container to see that it has been given an address in the bridge's IP
-address range and has been told to use the Docker host's IP address on
-the bridge as its default gateway to the rest of the Internet.
-
-## How Docker networks a container
-
-
-
-While Docker is under active development and continues to tweak and
-improve its network configuration logic, the shell commands in this
-section are rough equivalents to the steps that Docker takes when
-configuring networking for each new container.
-
-Let's review a few basics.
- -To communicate using the Internet Protocol (IP), a machine needs access -to at least one network interface at which packets can be sent and -received, and a routing table that defines the range of IP addresses -reachable through that interface. Network interfaces do not have to be -physical devices. In fact, the `lo` loopback interface available on -every Linux machine (and inside each Docker container) is entirely -virtual — the Linux kernel simply copies loopback packets directly from -the sender's memory into the receiver's memory. - -Docker uses special virtual interfaces to let containers communicate -with the host machine — pairs of virtual interfaces called “peers” that -are linked inside of the host machine's kernel so that packets can -travel between them. They are simple to create, as we will see in a -moment. - -The steps with which Docker configures a container are: - -1. Create a pair of peer virtual interfaces. - -2. Give one of them a unique name like `veth65f9`, keep it inside of - the main Docker host, and bind it to `docker0` or whatever bridge - Docker is supposed to be using. - -3. Toss the other interface over the wall into the new container (which - will already have been provided with an `lo` interface) and rename - it to the much prettier name `eth0` since, inside of the container's - separate and unique network interface namespace, there are no - physical interfaces with which this name could collide. - -4. Set the interface's MAC address according to the `--mac-address` - parameter or generate a random one. - -5. Give the container's `eth0` a new IP address from within the - bridge's range of network addresses, and set its default route to - the IP address that the Docker host owns on the bridge. If available - the IP address is generated from the MAC address. This prevents ARP - cache invalidation problems, when a new container comes up with an - IP used in the past by another container with another MAC. 
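Step 5's trick of deriving the address from the MAC is easiest to see in the IPv6 case discussed earlier, where the 48 MAC bits fit entirely inside a `/80` subnet. A sketch of that scheme (the helper name is made up; this is not Docker's actual code):

```python
import ipaddress

def mac_to_ipv6(subnet: str, mac: str) -> str:
    """Embed a 48-bit MAC address in the low bits of an IPv6 subnet.

    Illustrative sketch of the scheme described above, not Docker internals.
    """
    net = ipaddress.ip_network(subnet)
    assert net.prefixlen <= 80, "need 48 free bits to hold the MAC"
    mac_bits = int(mac.replace(":", ""), 16)
    return str(ipaddress.ip_address(int(net.network_address) | mac_bits))

# The same container MAC always yields the same address, so a new
# container cannot come up with an old IP but a different MAC.
print(mac_to_ipv6("2001:db8:1::/80", "02:42:ac:11:00:02"))  # 2001:db8:1::242:ac11:2
```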
-
-With these steps complete, the container now possesses an `eth0`
-(virtual) network card and will find itself able to communicate with
-other containers and the rest of the Internet.
-
-You can opt out of the above process for a particular container by
-giving the `--net=` option to `docker run`, which takes four possible
-values.
-
- * `--net=bridge` — The default action, which connects the container to
-   the Docker bridge as described above.
-
- * `--net=host` — Tells Docker to skip placing the container inside of
-   a separate network stack. In essence, this choice tells Docker to
-   **not containerize the container's networking**! While container
-   processes will still be confined to their own filesystem and process
-   list and resource limits, a quick `ip addr` command will show you
-   that, network-wise, they live “outside” in the main Docker host and
-   have full access to its network interfaces. Note that this does
-   **not** let the container reconfigure the host network stack — that
-   would require `--privileged=true` — but it does let container
-   processes open low-numbered ports like any other root process.
-   It also allows the container to access local network services
-   like D-Bus. This can lead to processes in the container being
-   able to do unexpected things like
-   [restart your computer](https://github.com/docker/docker/issues/6401).
-   You should use this option with caution.
-
- * `--net=container:NAME_or_ID` — Tells Docker to put this container's
-   processes inside of the network stack that has already been created
-   inside of another container. The new container's processes will be
-   confined to their own filesystem and process list and resource
-   limits, but will share the same IP address and port numbers as the
-   first container, and processes on the two containers will be able to
-   connect to each other over the loopback interface.
- - * `--net=none` — Tells Docker to put the container inside of its own - network stack but not to take any steps to configure its network, - leaving you free to build any of the custom configurations explored - in the last few sections of this document. - -To get an idea of the steps that are necessary if you use `--net=none` -as described in that last bullet point, here are the commands that you -would run to reach roughly the same configuration as if you had let -Docker do all of the configuration: - - # At one shell, start a container and - # leave its shell idle and running - - $ sudo docker run -i -t --rm --net=none base /bin/bash - root@63f36fc01b5f:/# - - # At another shell, learn the container process ID - # and create its namespace entry in /var/run/netns/ - # for the "ip netns" command we will be using below - - $ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f - 2778 - $ pid=2778 - $ sudo mkdir -p /var/run/netns - $ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid - - # Check the bridge's IP address and netmask - - $ ip addr show docker0 - 21: docker0: ... - inet 172.17.42.1/16 scope global docker0 - ... - - # Create a pair of "peer" interfaces A and B, - # bind the A end to the bridge, and bring it up - - $ sudo ip link add A type veth peer name B - $ sudo brctl addif docker0 A - $ sudo ip link set A up - - # Place B inside the container's network namespace, - # rename to eth0, and activate it with a free IP - - $ sudo ip link set B netns $pid - $ sudo ip netns exec $pid ip link set dev B name eth0 - $ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc - $ sudo ip netns exec $pid ip link set eth0 up - $ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0 - $ sudo ip netns exec $pid ip route add default via 172.17.42.1 - -At this point your container should be able to perform networking -operations as usual. 
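Since the wiring above follows a fixed recipe, it can be summarized by a small generator that emits the same commands for any container (a hypothetical helper, shown only to recap the steps; run the emitted commands with `sudo`):

```python
def wiring_commands(pid, bridge, ip_cidr, gateway):
    """Return the manual-wiring steps above as a command list (sketch)."""
    return [
        "mkdir -p /var/run/netns",
        f"ln -s /proc/{pid}/ns/net /var/run/netns/{pid}",
        "ip link add A type veth peer name B",   # create peer interfaces A and B
        f"brctl addif {bridge} A",               # bind the A end to the bridge
        "ip link set A up",
        f"ip link set B netns {pid}",            # move B into the container
        f"ip netns exec {pid} ip link set dev B name eth0",
        f"ip netns exec {pid} ip link set eth0 up",
        f"ip netns exec {pid} ip addr add {ip_cidr} dev eth0",
        f"ip netns exec {pid} ip route add default via {gateway}",
    ]

for cmd in wiring_commands(2778, "docker0", "172.17.42.99/16", "172.17.42.1"):
    print(cmd)
```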
-
-When you finally exit the shell and Docker cleans up the container, the
-network namespace is destroyed along with our virtual `eth0` — whose
-destruction in turn destroys interface `A` out in the Docker host and
-automatically un-registers it from the `docker0` bridge. So everything
-gets cleaned up without our having to run any extra commands! Well,
-almost everything:
-
-    # Clean up dangling symlinks in /var/run/netns
-
-    find -L /var/run/netns -type l -delete
-
-Also note that while the script above used the modern `ip` command instead
-of the older, deprecated wrappers like `ifconfig` and `route`, these older
-commands would also have worked inside of our container. The `ip addr`
-command can be typed as `ip a` if you are in a hurry.
-
-Finally, note the importance of the `ip netns exec` command, which let
-us reach inside and configure a network namespace as root. The same
-commands would not have worked if run inside of the container, because
-part of safe containerization is that Docker strips container processes
-of the right to configure their own networks. Using `ip netns exec` is
-what let us finish up the configuration without having to take the
-dangerous step of running the container itself with `--privileged=true`.
-
-## Tools and Examples
-
-Before diving into the following sections on custom network topologies,
-you might be interested in glancing at a few external tools or examples
-of the same kinds of configuration.
Here are two: - - * Jérôme Petazzoni has created a `pipework` shell script to help you - connect together containers in arbitrarily complex scenarios: - - - * Brandon Rhodes has created a whole network topology of Docker - containers for the next edition of Foundations of Python Network - Programming that includes routing, NAT'd firewalls, and servers that - offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP: - - -Both tools use networking commands very much like the ones you saw in -the previous section, and will see in the following sections. - -## Building a point-to-point connection - - - -By default, Docker attaches all containers to the virtual subnet -implemented by `docker0`. You can create containers that are each -connected to some different virtual subnet by creating your own bridge -as shown in [Building your own bridge](#bridge-building), starting each -container with `docker run --net=none`, and then attaching the -containers to your bridge with the shell commands shown in [How Docker -networks a container](#container-networking). - -But sometimes you want two particular containers to be able to -communicate directly without the added complexity of both being bound to -a host-wide Ethernet bridge. - -The solution is simple: when you create your pair of peer interfaces, -simply throw *both* of them into containers, and configure them as -classic point-to-point links. The two containers will then be able to -communicate directly (provided you manage to tell each container the -other's IP address, of course). 
You might adjust the instructions of
-the previous section to go something like this:
-
-    # Start up two containers in two terminal windows
-
-    $ sudo docker run -i -t --rm --net=none base /bin/bash
-    root@1f1f4c1f931a:/#
-
-    $ sudo docker run -i -t --rm --net=none base /bin/bash
-    root@12e343489d2f:/#
-
-    # Learn the container process IDs
-    # and create their namespace entries
-
-    $ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
-    2989
-    $ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f
-    3004
-    $ sudo mkdir -p /var/run/netns
-    $ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
-    $ sudo ln -s /proc/3004/ns/net /var/run/netns/3004
-
-    # Create the "peer" interfaces and hand them out
-
-    $ sudo ip link add A type veth peer name B
-
-    $ sudo ip link set A netns 2989
-    $ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
-    $ sudo ip netns exec 2989 ip link set A up
-    $ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
-
-    $ sudo ip link set B netns 3004
-    $ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
-    $ sudo ip netns exec 3004 ip link set B up
-    $ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
-
-The two containers should now be able to ping each other and make
-connections successfully. Point-to-point links like this depend neither
-on a subnet nor a netmask, but on the bare assertion made by `ip route`
-that some other single IP address is connected to a particular network
-interface.
-
-Note that point-to-point links can be safely combined with other kinds
-of network connectivity — there is no need to start the containers with
-`--net=none` if you want point-to-point links to be an addition to the
-container's normal networking instead of a replacement.
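The claim that such links need neither subnet nor netmask can be checked with Python's `ipaddress` module: the two `/32` host routes do not overlap at all, yet each one names exactly one peer.

```python
import ipaddress

a = ipaddress.ip_network("10.1.1.1/32")
b = ipaddress.ip_network("10.1.1.2/32")

print(a.num_addresses, b.num_addresses)  # 1 1: each /32 names a single host
print(a.overlaps(b))                     # False: the peers share no subnet
```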
-
-A final permutation of this pattern is to create the point-to-point link
-between the Docker host and one container, which would allow the host to
-communicate with that one container on some single IP address and thus
-communicate “out-of-band” of the bridge that connects the other, more
-usual containers. But unless you have very specific networking needs
-that drive you to such a solution, it is probably far preferable to use
-`--icc=false` to lock down inter-container communication, as we explored
-earlier.
-
-## Editing networking config files
-
-Starting with Docker v1.2.0, you can now edit `/etc/hosts`, `/etc/hostname`
-and `/etc/resolv.conf` in a running container. This is useful if you need
-to install bind or other services that might override one of those files.
-
-Note, however, that changes to these files will not be saved by
-`docker commit`, nor will they be saved during `docker run`.
-That means they won't be saved in the image, nor will they persist when a
-container is restarted; they will only "stick" in a running container.
diff --git a/articles/puppet.md~ b/articles/puppet.md~
deleted file mode 100644
index d9a7ceb70e..0000000000
--- a/articles/puppet.md~
+++ /dev/null
@@ -1,93 +0,0 @@
-page_title: Puppet Usage
-page_description: Installing and using Puppet
-page_keywords: puppet, installation, usage, docker, documentation
-
-# Using Puppet
-
-> *Note:* This is a community-contributed installation path. The
-> only `official` installation is using the
-> [*Ubuntu*](/installation/ubuntulinux) installation
-> path. This version may sometimes be out of date.
-
-## Requirements
-
-To use this guide you'll need a working installation of Puppet from
-[Puppet Labs](https://puppetlabs.com).
-
-The module also currently uses the official PPA, so it only works with
-Ubuntu.
-
-## Installation
-
-The module is available on the [Puppet
-Forge](https://forge.puppetlabs.com/garethr/docker/) and can be
-installed using the built-in module tool.
-
-    $ puppet module install garethr/docker
-
-It can also be found on
-[GitHub](https://github.com/garethr/garethr-docker) if you would rather
-download the source.
-
-## Usage
-
-The module provides a Puppet class for installing Docker and two defined
-types for managing images and containers.
-
-### Installation
-
-    include 'docker'
-
-### Images
-
-The next step is probably to install a Docker image. For this, we have a
-defined type which can be used like so:
-
-    docker::image { 'ubuntu': }
-
-This is equivalent to running:
-
-    $ sudo docker pull ubuntu
-
-Note that it will only be downloaded if an image of that name does not
-already exist. Because this downloads a large binary, the first run can
-take a while. For that reason this define turns off the default 5-minute
-timeout for the exec type. Note that you can also remove images you no
-longer need with:
-
-    docker::image { 'ubuntu':
-      ensure => 'absent',
-    }
-
-### Containers
-
-Now that you have an image, you can run commands within a container
-managed by Docker.
- - docker::run { 'helloworld': - image => 'ubuntu', - command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"', - } - -This is equivalent to running the following command, but under upstart: - - $ sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done" - -Run also contains a number of optional parameters: - - docker::run { 'helloworld': - image => 'ubuntu', - command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"', - ports => ['4444', '4555'], - volumes => ['/var/lib/couchdb', '/var/log'], - volumes_from => '6446ea52fbc9', - memory_limit => 10485760, # bytes - username => 'example', - hostname => 'example.com', - env => ['FOO=BAR', 'FOO2=BAR2'], - dns => ['8.8.8.8', '8.8.4.4'], - } - -> *Note:* -> The `ports`, `env`, `dns` and `volumes` attributes can be set with either a single -> string or as above with an array of values. diff --git a/articles/registry_mirror.md~ b/articles/registry_mirror.md~ deleted file mode 100644 index a7493e9aec..0000000000 --- a/articles/registry_mirror.md~ +++ /dev/null @@ -1,83 +0,0 @@ -page_title: Run a local registry mirror -page_description: How to set up and run a local registry mirror -page_keywords: docker, registry, mirror, examples - -# Run a local registry mirror - -## Why? - -If you have multiple instances of Docker running in your environment -(e.g., multiple physical or virtual machines, all running the Docker -daemon), each time one of them requires an image that it doesn't have -it will go out to the internet and fetch it from the public Docker -registry. By running a local registry mirror, you can keep most of the -image fetch traffic on your local network. - -## How does it work? - -The first time you request an image from your local registry mirror, -it pulls the image from the public Docker registry and stores it locally -before handing it back to you. On subsequent requests, the local registry -mirror is able to serve the image from its own storage. 
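In other words, the mirror behaves as a pull-through cache. A toy model of that behavior (names invented for illustration; this is not the `registry` image's code):

```python
class PullThroughMirror:
    """Toy model of a pull-through cache mirror (illustrative only)."""

    def __init__(self, upstream):
        self.upstream = upstream      # maps image name -> image data
        self.storage = {}             # local mirror storage

    def pull(self, name):
        if name in self.storage:
            return self.storage[name], "mirror"     # served from own storage
        data = self.upstream[name]                  # fetch from public registry
        self.storage[name] = data                   # store before handing back
        return data, "upstream"

mirror = PullThroughMirror({"node:latest": b"layers..."})
print(mirror.pull("node:latest")[1])  # upstream (first request goes out)
print(mirror.pull("node:latest")[1])  # mirror (subsequent requests stay local)
```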
- -## How do I set up a local registry mirror? - -There are two steps to set up and use a local registry mirror. - -### Step 1: Configure your Docker daemons to use the local registry mirror - -You will need to pass the `--registry-mirror` option to your Docker daemon on -startup: - - sudo docker --registry-mirror=http:// -d - -For example, if your mirror is serving on `http://10.0.0.2:5000`, you would run: - - sudo docker --registry-mirror=http://10.0.0.2:5000 -d - -**NOTE:** -Depending on your local host setup, you may be able to add the -`--registry-mirror` options to the `DOCKER_OPTS` variable in -`/etc/default/docker`. - -### Step 2: Run the local registry mirror - -You will need to start a local registry mirror service. The -[`registry` image](https://registry.hub.docker.com/_/registry/) provides this -functionality. For example, to run a local registry mirror that serves on -port `5000` and mirrors the content at `registry-1.docker.io`: - - sudo docker run -p 5000:5000 \ - -e STANDALONE=false \ - -e MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io registry - -## Test it out - -With your mirror running, pull an image that you haven't pulled before (using -`time` to time it): - - $ time sudo docker pull node:latest - Pulling repository node - [...] - - real 1m14.078s - user 0m0.176s - sys 0m0.120s - -Now, remove the image from your local machine: - - $ sudo docker rmi node:latest - -Finally, re-pull the image: - - $ time sudo docker pull node:latest - Pulling repository node - [...] - - real 0m51.376s - user 0m0.120s - sys 0m0.116s - -The second time around, the local registry mirror served the image from storage, -avoiding a trip out to the internet to refetch it. 
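Using the timings above, the saving from the warm cache is easy to quantify (numbers from this particular run; yours will vary):

```python
cold = 74.078   # seconds: first pull, mirror must fetch from upstream
warm = 51.376   # seconds: second pull, served from mirror storage

saved = cold - warm
print(round(saved, 3))                # 22.702 seconds saved
print(round(100 * saved / cold, 1))   # 30.6 percent faster
```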
diff --git a/articles/runmetrics.md~ b/articles/runmetrics.md~
deleted file mode 100644
index 3276409697..0000000000
--- a/articles/runmetrics.md~
+++ /dev/null
@@ -1,438 +0,0 @@
-page_title: Runtime Metrics
-page_description: Measure the behavior of running containers
-page_keywords: docker, metrics, CPU, memory, disk, IO, run, runtime
-
-# Runtime Metrics
-
-Linux Containers rely on [control groups](
-https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt)
-which not only track groups of processes, but also expose metrics about
-CPU, memory, and block I/O usage. You can access those metrics and
-obtain network usage metrics as well. This is relevant for "pure" LXC
-containers, as well as for Docker containers.
-
-## Control Groups
-
-Control groups are exposed through a pseudo-filesystem. In recent
-distros, you should find this filesystem under `/sys/fs/cgroup`. Under
-that directory, you will see multiple sub-directories, called devices,
-freezer, blkio, etc.; each sub-directory actually corresponds to a different
-cgroup hierarchy.
-
-On older systems, the control groups might be mounted on `/cgroup`, without
-distinct hierarchies. In that case, instead of seeing the sub-directories,
-you will see a bunch of files in that directory, and possibly some directories
-corresponding to existing containers.
-
-To figure out where your control groups are mounted, you can run:
-
-    $ grep cgroup /proc/mounts
-
-## Enumerating Cgroups
-
-You can look into `/proc/cgroups` to see the different control group subsystems
-known to the system, the hierarchy they belong to, and how many groups they contain.
-
-You can also look at `/proc/<pid>/cgroup` to see which control groups a process
-belongs to. The control group will be shown as a path relative to the root of
-the hierarchy mountpoint; e.g., `/` means “this process has not been assigned into
-a particular group”, while `/lxc/pumpkin` means that the process is likely to be
-a member of a container named `pumpkin`.
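Each line of `/proc/<pid>/cgroup` has the form `hierarchy-ID:subsystems:path`, which makes it straightforward to parse (the sample content below is invented for illustration):

```python
sample = """\
6:memory:/lxc/pumpkin
3:cpu,cpuacct:/lxc/pumpkin
1:devices:/
"""

def parse_cgroups(text):
    """Map each cgroup subsystem to the path of the group a process is in."""
    groups = {}
    for line in text.strip().splitlines():
        # hierarchy-ID : comma-separated subsystems : path in that hierarchy
        hierarchy, subsystems, path = line.split(":", 2)
        for subsystem in subsystems.split(","):
            groups[subsystem] = path
    return groups

groups = parse_cgroups(sample)
print(groups["memory"])   # /lxc/pumpkin: likely a container named "pumpkin"
print(groups["devices"])  # /: not assigned to a particular group
```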
-
-## Finding the Cgroup for a Given Container
-
-For each container, one cgroup will be created in each hierarchy. On
-older systems with older versions of the LXC userland tools, the name of
-the cgroup will be the name of the container. With more recent versions
-of the LXC tools, the cgroup will be `lxc/<container_name>.`
-
-For Docker containers using cgroups, the container name will be the full
-ID or long ID of the container. If a container shows up as `ae836c95b4c3`
-in `docker ps`, its long ID might be something like
-`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can
-look it up with `docker inspect` or `docker ps --no-trunc`.
-
-Putting everything together to look at the memory metrics for a Docker
-container, take a look at `/sys/fs/cgroup/memory/lxc/<longid>/`.
-
-## Metrics from Cgroups: Memory, CPU, Block IO
-
-For each subsystem (memory, CPU, and block I/O), you will find one or
-more pseudo-files containing statistics.
-
-### Memory Metrics: `memory.stat`
-
-Memory metrics are found in the "memory" cgroup. Note that the memory
-control group adds a little overhead, because it does very fine-grained
-accounting of the memory usage on your host. Therefore, many distros
-choose not to enable it by default. Generally, to enable it, all you have
-to do is to add some kernel command-line parameters:
-`cgroup_enable=memory swapaccount=1`.
-
-The metrics are in the pseudo-file `memory.stat`.
-Here is what it will look like: - - cache 11492564992 - rss 1930993664 - mapped_file 306728960 - pgpgin 406632648 - pgpgout 403355412 - swap 0 - pgfault 728281223 - pgmajfault 1724 - inactive_anon 46608384 - active_anon 1884520448 - inactive_file 7003344896 - active_file 4489052160 - unevictable 32768 - hierarchical_memory_limit 9223372036854775807 - hierarchical_memsw_limit 9223372036854775807 - total_cache 11492564992 - total_rss 1930993664 - total_mapped_file 306728960 - total_pgpgin 406632648 - total_pgpgout 403355412 - total_swap 0 - total_pgfault 728281223 - total_pgmajfault 1724 - total_inactive_anon 46608384 - total_active_anon 1884520448 - total_inactive_file 7003344896 - total_active_file 4489052160 - total_unevictable 32768 - -The first half (without the `total_` prefix) contains statistics relevant -to the processes within the cgroup, excluding sub-cgroups. The second half -(with the `total_` prefix) includes sub-cgroups as well. - -Some metrics are "gauges", i.e., values that can increase or decrease -(e.g., swap, the amount of swap space used by the members of the cgroup). -Some others are "counters", i.e., values that can only go up, because -they represent occurrences of a specific event (e.g., pgfault, which -indicates the number of page faults which happened since the creation of -the cgroup; this number can never decrease). - - - - **cache:** - the amount of memory used by the processes of this control group - that can be associated precisely with a block on a block device. - When you read from and write to files on disk, this amount will - increase. This will be the case if you use "conventional" I/O - (`open`, `read`, - `write` syscalls) as well as mapped files (with - `mmap`). It also accounts for the memory used by - `tmpfs` mounts, though the reasons are unclear. - - - **rss:** - the amount of memory that *doesn't* correspond to anything on disk: - stacks, heaps, and anonymous memory maps. 
-
- - **mapped_file:**
-   indicates the amount of memory mapped by the processes in the
-   control group. It doesn't give you information about *how much*
-   memory is used; it rather tells you *how* it is used.
-
- - **pgfault and pgmajfault:**
-   indicate the number of times that a process of the cgroup triggered
-   a "page fault" and a "major fault", respectively. A page fault
-   happens when a process accesses a part of its virtual memory space
-   which is nonexistent or protected. The former can happen if the
-   process is buggy and tries to access an invalid address (it will
-   then be sent a `SIGSEGV` signal, typically
-   killing it with the famous `Segmentation fault`
-   message). The latter can happen when the process reads from a memory
-   zone which has been swapped out, or which corresponds to a mapped
-   file: in that case, the kernel will load the page from disk, and let
-   the CPU complete the memory access. It can also happen when the
-   process writes to a copy-on-write memory zone: likewise, the kernel
-   will preempt the process, duplicate the memory page, and resume the
-   write operation on the process's own copy of the page. "Major" faults
-   happen when the kernel actually has to read the data from disk. When
-   it just has to duplicate an existing page, or allocate an empty
-   page, it's a regular (or "minor") fault.
-
- - **swap:**
-   the amount of swap currently used by the processes in this cgroup.
-
- - **active_anon and inactive_anon:**
-   the amount of *anonymous* memory that has been identified as
-   respectively *active* and *inactive* by the kernel. "Anonymous"
-   memory is the memory that is *not* linked to disk pages. In other
-   words, that's the equivalent of the rss counter described above. In
-   fact, the very definition of the rss counter is **active_anon** +
-   **inactive_anon** - **tmpfs** (where tmpfs is the amount of memory
-   used up by `tmpfs` filesystems mounted by this
-   control group).
Now, what's the difference between "active" and - "inactive"? Pages are initially "active"; and at regular intervals, - the kernel sweeps over the memory, and tags some pages as - "inactive". Whenever they are accessed again, they are immediately - retagged "active". When the kernel is almost out of memory, and time - comes to swap out to disk, the kernel will swap "inactive" pages. - - - **active_file and inactive_file:** - cache memory, with *active* and *inactive* similar to the *anon* - memory above. The exact formula is cache = **active_file** + - **inactive_file** + **tmpfs**. The exact rules used by the kernel - to move memory pages between active and inactive sets are different - from the ones used for anonymous memory, but the general principle - is the same. Note that when the kernel needs to reclaim memory, it - is cheaper to reclaim a clean (=non modified) page from this pool, - since it can be reclaimed immediately (while anonymous pages and - dirty/modified pages have to be written to disk first). - - - **unevictable:** - the amount of memory that cannot be reclaimed; generally, it will - account for memory that has been "locked" with `mlock`. - It is often used by crypto frameworks to make sure that - secret keys and other sensitive material never gets swapped out to - disk. - - - **memory and memsw limits:** - These are not really metrics, but a reminder of the limits applied - to this cgroup. The first one indicates the maximum amount of - physical memory that can be used by the processes of this control - group; the second one indicates the maximum amount of RAM+swap. - -Accounting for memory in the page cache is very complex. If two -processes in different control groups both read the same file -(ultimately relying on the same blocks on disk), the corresponding -memory charge will be split between the control groups. 
It's nice, but
-it also means that when a cgroup is terminated, it could increase the
-memory usage of another cgroup, because they are not splitting the cost
-anymore for those memory pages.
-
-### CPU metrics: `cpuacct.stat`
-
-Now that we've covered memory metrics, everything else will look very
-simple in comparison. CPU metrics will be found in the
-`cpuacct` controller.
-
-For each container, you will find a pseudo-file `cpuacct.stat`,
-containing the CPU usage accumulated by the processes of the container,
-broken down between `user` and `system` time. If you're not familiar
-with the distinction, `user` is the time during which the processes were
-in direct control of the CPU (i.e., executing process code), and `system`
-is the time during which the CPU was executing system calls on behalf of
-those processes.
-
-Those times are expressed in ticks of 1/100th of a second. Actually,
-they are expressed in "user jiffies". There are `USER_HZ`
-*"jiffies"* per second, and on x86 systems,
-`USER_HZ` is 100. This used to map exactly to the
-number of scheduler "ticks" per second; but with the advent of higher
-frequency scheduling, as well as [tickless kernels](
-http://lwn.net/Articles/549580/), the number of kernel ticks
-wasn't relevant anymore. It stuck around anyway, mainly for legacy and
-compatibility reasons.
-
-### Block I/O metrics
-
-Block I/O is accounted in the `blkio` controller.
-Different metrics are scattered across different files. While you can
-find in-depth details in the [blkio-controller](
-https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt)
-file in the kernel documentation, here is a short list of the most
-relevant ones:
-
-
- - **blkio.sectors:**
-   contains the number of 512-byte sectors read and written by the
-   processes that are members of the cgroup, device by device. Reads
-   and writes are merged in a single counter.
-
- - **blkio.io_service_bytes:**
-   indicates the number of bytes read and written by the cgroup.
It has 4 counters per device, because for each device, it
differentiates between synchronous vs. asynchronous I/O, and between
reads and writes.

 - **blkio.io_serviced:**
   the number of I/O operations performed, regardless of their size. It
   also has 4 counters per device.

 - **blkio.io_queued:**
   indicates the number of I/O operations currently queued for this
   cgroup. In other words, if the cgroup isn't doing any I/O, this will
   be zero. Note that the opposite is not true: if there is no I/O
   queued, it does not mean that the cgroup is idle (I/O-wise). It
   could be doing purely synchronous reads on an otherwise quiescent
   device, which is therefore able to handle them immediately, without
   queuing. Also, while it is helpful to figure out which cgroup is
   putting stress on the I/O subsystem, keep in mind that it is a
   relative quantity. Even if a process group does not perform more
   I/O, its queue size can increase just because the device load
   increases due to other devices.

## Network Metrics

Network metrics are not exposed directly by control groups. There is a
good explanation for that: network interfaces exist within the context
of *network namespaces*. The kernel could probably accumulate metrics
about packets and bytes sent and received by a group of processes, but
those metrics wouldn't be very useful. You want per-interface metrics
(because traffic happening on the local `lo` interface doesn't really
count). But since processes in a single cgroup can belong to multiple
network namespaces, those metrics would be harder to interpret:
multiple network namespaces means multiple `lo` interfaces, potentially
multiple `eth0` interfaces, etc.; this is why there is no easy way to
gather network metrics with control groups.
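The cgroup counters that *are* available, like `cpuacct.stat` above,
are plain text and easy to post-process. A minimal sketch converting
the tick counters into seconds, using made-up sample values and
assuming `USER_HZ` is 100 (as on x86):

```shell
# cpuacct.stat as it might appear for a container (sample values)
stat="user 46197
system 8413"

# USER_HZ is 100 on x86, so each tick is 1/100th of a second
user_secs=$(echo "$stat" | awk '$1 == "user" { print $2 / 100 }')
system_secs=$(echo "$stat" | awk '$1 == "system" { print $2 / 100 }')

echo "user: ${user_secs}s system: ${system_secs}s"
```

For the sample values above, this prints `user: 461.97s system: 84.13s`.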
- -Instead we can gather network metrics from other sources: - -### IPtables - -IPtables (or rather, the netfilter framework for which iptables is just -an interface) can do some serious accounting. - -For instance, you can setup a rule to account for the outbound HTTP -traffic on a web server: - - $ iptables -I OUTPUT -p tcp --sport 80 - -There is no `-j` or `-g` flag, -so the rule will just count matched packets and go to the following -rule. - -Later, you can check the values of the counters, with: - - $ iptables -nxvL OUTPUT - -Technically, `-n` is not required, but it will -prevent iptables from doing DNS reverse lookups, which are probably -useless in this scenario. - -Counters include packets and bytes. If you want to setup metrics for -container traffic like this, you could execute a `for` -loop to add two `iptables` rules per -container IP address (one in each direction), in the `FORWARD` -chain. This will only meter traffic going through the NAT -layer; you will also have to add traffic going through the userland -proxy. - -Then, you will need to check those counters on a regular basis. If you -happen to use `collectd`, there is a [nice plugin](https://collectd.org/wiki/index.php/Plugin:IPTables) -to automate iptables counters collection. - -### Interface-level counters - -Since each container has a virtual Ethernet interface, you might want to -check directly the TX and RX counters of this interface. You will notice -that each container is associated to a virtual Ethernet interface in -your host, with a name like `vethKk8Zqi`. Figuring -out which interface corresponds to which container is, unfortunately, -difficult. - -But for now, the best way is to check the metrics *from within the -containers*. To accomplish this, you can run an executable from the host -environment within the network namespace of a container using **ip-netns -magic**. 
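Once inside the container's network namespace (the `ip netns`
mechanics are described next), the per-interface counters that
`netstat -i` reports come from `/proc/net/dev`. A sketch parsing that
format, using made-up sample data:

```shell
# Sample /proc/net/dev contents: two header lines, then one row per
# interface (names and counter values are invented for illustration)
sample='Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:    3250      40    0    0    0     0          0         0     3250      40    0    0    0     0       0          0
  eth0:  987654    1020    0    0    0     0          0         0    54321     600    0    0    0     0       0          0'

# Per row: field 1 is the interface name, field 2 is RX bytes,
# field 10 is TX bytes
echo "$sample" | awk 'NR > 2 { gsub(":", "", $1); printf "%s rx=%s tx=%s\n", $1, $2, $10 }'
```

For the sample above, this prints `lo rx=3250 tx=3250` followed by
`eth0 rx=987654 tx=54321`.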
The `ip netns exec` command will let you execute any program (present
in the host system) within any network namespace visible to the
current process. This means that your host will be able to enter the
network namespace of your containers, but your containers won't be
able to access the host, nor their sibling containers. Containers will
be able to "see" and affect their sub-containers, though.

The exact format of the command is:

    $ ip netns exec <NAME> <COMMAND>

For example:

    $ ip netns exec mycontainer netstat -i

`ip netns` finds the "mycontainer" container by using namespaces
pseudo-files. Each process belongs to one network namespace, one PID
namespace, one `mnt` namespace, etc., and those namespaces are
materialized under `/proc/<PID>/ns/`. For example, the network
namespace of PID 42 is materialized by the pseudo-file
`/proc/42/ns/net`.

When you run `ip netns exec mycontainer ...`, it expects
`/var/run/netns/mycontainer` to be one of those pseudo-files.
(Symlinks are accepted.)

In other words, to execute a command within the network namespace of a
container, we need to:

- Find out the PID of any process within the container that we want to
  investigate;
- Create a symlink from `/var/run/netns/<NAME>` to
  `/proc/<PID>/ns/net`;
- Execute `ip netns exec <NAME> ...`.

Please review [*Enumerating Cgroups*](#enumerating-cgroups) to learn
how to find the cgroup of a process running in the container of which
you want to measure network usage. From there, you can examine the
pseudo-file named `tasks`, which contains the PIDs that are in the
control group (i.e., in the container). Pick any one of them.
- -Putting everything together, if the "short ID" of a container is held in -the environment variable `$CID`, then you can do this: - - $ TASKS=/sys/fs/cgroup/devices/$CID*/tasks - $ PID=$(head -n 1 $TASKS) - $ mkdir -p /var/run/netns - $ ln -sf /proc/$PID/ns/net /var/run/netns/$CID - $ ip netns exec $CID netstat -i - -## Tips for high-performance metric collection - -Note that running a new process each time you want to update metrics is -(relatively) expensive. If you want to collect metrics at high -resolutions, and/or over a large number of containers (think 1000 -containers on a single host), you do not want to fork a new process each -time. - -Here is how to collect metrics from a single process. You will have to -write your metric collector in C (or any language that lets you do -low-level system calls). You need to use a special system call, -`setns()`, which lets the current process enter any -arbitrary namespace. It requires, however, an open file descriptor to -the namespace pseudo-file (remember: that's the pseudo-file in -`/proc//ns/net`). - -However, there is a catch: you must not keep this file descriptor open. -If you do, when the last process of the control group exits, the -namespace will not be destroyed, and its network resources (like the -virtual interface of the container) will stay around for ever (or until -you close that file descriptor). - -The right approach would be to keep track of the first PID of each -container, and re-open the namespace pseudo-file each time. - -## Collecting metrics when a container exits - -Sometimes, you do not care about real time metric collection, but when a -container exits, you want to know how much CPU, memory, etc. it has -used. - -Docker makes this difficult because it relies on `lxc-start`, which -carefully cleans up after itself, but it is still possible. It is -usually easier to collect metrics at regular intervals (e.g., every -minute, with the collectd LXC plugin) and rely on that instead. 
But, if you'd still like to gather the stats when a container stops,
here is how:

For each container, start a collection process, and move it to the
control groups that you want to monitor by writing its PID to the
`tasks` file of the cgroup. The collection process should periodically
re-read the `tasks` file to check if it's the last process of the
control group. (If you also want to collect network statistics as
explained in the previous section, you should also move the process to
the appropriate network namespace.)

When the container exits, `lxc-start` will try to delete the control
groups. It will fail, since the control group is still in use; but
that's fine. Your process should now detect that it is the only one
remaining in the group. Now is the right time to collect all the
metrics you need!

Finally, your process should move itself back to the root control
group, and remove the container control group. To remove a control
group, just `rmdir` its directory. It's counter-intuitive to `rmdir` a
directory while it still contains files; but remember that this is a
pseudo-filesystem, so the usual rules don't apply. After the cleanup is
done, the collection process can exit safely.

diff --git a/articles/security.md~ b/articles/security.md~
deleted file mode 100644
index a26f79cf9b..0000000000
--- a/articles/security.md~
+++ /dev/null
@@ -1,276 +0,0 @@
page_title: Docker Security
page_description: Review of the Docker Daemon attack surface
page_keywords: Docker, Docker documentation, security

# Docker Security

There are four major areas to consider when reviewing Docker security:

 - the intrinsic security of the kernel and its support for namespaces
   and cgroups;
 - the attack surface of the Docker daemon itself;
 - loopholes in the container configuration profile, either by default,
   or when customized by users;
 - the "hardening" security features of the kernel and how they
   interact with containers.
- -## Kernel Namespaces - -Docker containers are very similar to LXC containers, and they have -similar security features. When you start a container with `docker -run`, behind the scenes Docker creates a set of namespaces and control -groups for the container. - -**Namespaces provide the first and most straightforward form of -isolation**: processes running within a container cannot see, and even -less affect, processes running in another container, or in the host -system. - -**Each container also gets its own network stack**, meaning that a -container doesn't get privileged access to the sockets or interfaces -of another container. Of course, if the host system is setup -accordingly, containers can interact with each other through their -respective network interfaces — just like they can interact with -external hosts. When you specify public ports for your containers or use -[*links*](/userguide/dockerlinks) -then IP traffic is allowed between containers. They can ping each other, -send/receive UDP packets, and establish TCP connections, but that can be -restricted if necessary. From a network architecture point of view, all -containers on a given Docker host are sitting on bridge interfaces. This -means that they are just like physical machines connected through a -common Ethernet switch; no more, no less. - -How mature is the code providing kernel namespaces and private -networking? Kernel namespaces were introduced [between kernel version -2.6.15 and -2.6.26](http://lxc.sourceforge.net/index.php/about/kernel-namespaces/). -This means that since July 2008 (date of the 2.6.26 release, now 5 years -ago), namespace code has been exercised and scrutinized on a large -number of production systems. And there is more: the design and -inspiration for the namespaces code are even older. Namespaces are -actually an effort to reimplement the features of [OpenVZ]( -http://en.wikipedia.org/wiki/OpenVZ) in such a way that they could be -merged within the mainstream kernel. 
And OpenVZ was initially released -in 2005, so both the design and the implementation are pretty mature. - -## Control Groups - -Control Groups are another key component of Linux Containers. They -implement resource accounting and limiting. They provide many -useful metrics, but they also help ensure that each container gets -its fair share of memory, CPU, disk I/O; and, more importantly, that a -single container cannot bring the system down by exhausting one of those -resources. - -So while they do not play a role in preventing one container from -accessing or affecting the data and processes of another container, they -are essential to fend off some denial-of-service attacks. They are -particularly important on multi-tenant platforms, like public and -private PaaS, to guarantee a consistent uptime (and performance) even -when some applications start to misbehave. - -Control Groups have been around for a while as well: the code was -started in 2006, and initially merged in kernel 2.6.24. - -## Docker Daemon Attack Surface - -Running containers (and applications) with Docker implies running the -Docker daemon. This daemon currently requires `root` privileges, and you -should therefore be aware of some important details. - -First of all, **only trusted users should be allowed to control your -Docker daemon**. This is a direct consequence of some powerful Docker -features. Specifically, Docker allows you to share a directory between -the Docker host and a guest container; and it allows you to do so -without limiting the access rights of the container. This means that you -can start a container where the `/host` directory will be the `/` directory -on your host; and the container will be able to alter your host filesystem -without any restriction. This is similar to how virtualization systems -allow filesystem resource sharing. Nothing prevents you from sharing your -root filesystem (or even your root block device) with a virtual machine. 
This has a strong security implication: for example, if you instrument
Docker from a web server to provision containers through an API, you
should be even more careful than usual with parameter checking, to
make sure that a malicious user cannot pass crafted parameters causing
Docker to create arbitrary containers.

For this reason, the REST API endpoint (used by the Docker CLI to
communicate with the Docker daemon) changed in Docker 0.5.2, and now
uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the
latter being prone to cross-site request forgery attacks if you happen
to run Docker directly on your local machine, outside of a VM). You
can then use traditional UNIX permission checks to limit access to the
control socket.

You can also expose the REST API over HTTP if you explicitly decide to
do so. However, if you do that, being aware of the above-mentioned
security implications, you should ensure that it will be reachable
only from a trusted network or VPN; or protected with e.g., `stunnel`
and client SSL certificates. You can also secure it with [HTTPS and
certificates](/articles/https/).

The daemon is also potentially vulnerable to other inputs, such as
image loading from either disk with `docker load`, or from the network
with `docker pull`. This has been a focus of improvement in the
community, especially for `pull` security. While these overlap, it
should be noted that `docker load` is a mechanism for backup and
restore and is not currently considered a secure mechanism for loading
images. As of Docker 1.3.2, images are extracted in a chrooted
subprocess on Linux/Unix platforms, the first step in a wider effort
toward privilege separation.

Eventually, it is expected that the Docker daemon will run with
restricted privileges, delegating operations to well-audited
sub-processes, each with its own (very limited) scope of Linux
capabilities, virtual network setup, filesystem management, etc.
That is, most likely, -pieces of the Docker engine itself will run inside of containers. - -Finally, if you run Docker on a server, it is recommended to run -exclusively Docker in the server, and move all other services within -containers controlled by Docker. Of course, it is fine to keep your -favorite admin tools (probably at least an SSH server), as well as -existing monitoring/supervision processes (e.g., NRPE, collectd, etc). - -## Linux Kernel Capabilities - -By default, Docker starts containers with a restricted set of -capabilities. What does that mean? - -Capabilities turn the binary "root/non-root" dichotomy into a -fine-grained access control system. Processes (like web servers) that -just need to bind on a port below 1024 do not have to run as root: they -can just be granted the `net_bind_service` capability instead. And there -are many other capabilities, for almost all the specific areas where root -privileges are usually needed. - -This means a lot for container security; let's see why! - -Your average server (bare metal or virtual machine) needs to run a bunch -of processes as root. Those typically include SSH, cron, syslogd; -hardware management tools (e.g., load modules), network configuration -tools (e.g., to handle DHCP, WPA, or VPNs), and much more. 
A container is -very different, because almost all of those tasks are handled by the -infrastructure around the container: - - - SSH access will typically be managed by a single server running on - the Docker host; - - `cron`, when necessary, should run as a user - process, dedicated and tailored for the app that needs its - scheduling service, rather than as a platform-wide facility; - - log management will also typically be handed to Docker, or by - third-party services like Loggly or Splunk; - - hardware management is irrelevant, meaning that you never need to - run `udevd` or equivalent daemons within - containers; - - network management happens outside of the containers, enforcing - separation of concerns as much as possible, meaning that a container - should never need to perform `ifconfig`, - `route`, or ip commands (except when a container - is specifically engineered to behave like a router or firewall, of - course). - -This means that in most cases, containers will not need "real" root -privileges *at all*. And therefore, containers can run with a reduced -capability set; meaning that "root" within a container has much less -privileges than the real "root". For instance, it is possible to: - - - deny all "mount" operations; - - deny access to raw sockets (to prevent packet spoofing); - - deny access to some filesystem operations, like creating new device - nodes, changing the owner of files, or altering attributes (including - the immutable flag); - - deny module loading; - - and many others. - -This means that even if an intruder manages to escalate to root within a -container, it will be much harder to do serious damage, or to escalate -to the host. - -This won't affect regular web apps; but malicious users will find that -the arsenal at their disposal has shrunk considerably! 
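You can inspect what is left by decoding the `CapEff` bitmask in
`/proc/<pid>/status` from inside a container. A sketch with a sample
mask (the value used here is a historical Docker default capability
set, shown purely for illustration):

```shell
# CapEff as it might appear in /proc/<pid>/status inside a container;
# this sample value is a historical Docker default set (illustrative)
capeff=00000000a80425fb

# Test whether capability number $1 is set in the mask
has_cap() {
    [ $(( 0x$capeff >> $1 & 1 )) -eq 1 ]
}

has_cap 13 && echo "CAP_NET_RAW (13) is available"        # raw sockets
has_cap 16 || echo "CAP_SYS_MODULE (16) is not available" # no modules
```

For this sample mask, `CAP_NET_RAW` is present while `CAP_SYS_MODULE`
is not, matching the "deny module loading" policy described above.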
By default Docker -drops all capabilities except [those -needed](https://github.com/docker/docker/blob/master/daemon/execdriver/native/template/default_template.go), -a whitelist instead of a blacklist approach. You can see a full list of -available capabilities in [Linux -manpages](http://man7.org/linux/man-pages/man7/capabilities.7.html). - -One primary risk with running Docker containers is that the default set -of capabilities and mounts given to a container may provide incomplete -isolation, either independently, or when used in combination with -kernel vulnerabilities. - -Docker supports the addition and removal of capabilities, allowing use -of a non-default profile. This may make Docker more secure through -capability removal, or less secure through the addition of capabilities. -The best practice for users would be to remove all capabilities except -those explicitly required for their processes. - -## Other Kernel Security Features - -Capabilities are just one of the many security features provided by -modern Linux kernels. It is also possible to leverage existing, -well-known systems like TOMOYO, AppArmor, SELinux, GRSEC, etc. with -Docker. - -While Docker currently only enables capabilities, it doesn't interfere -with the other systems. This means that there are many different ways to -harden a Docker host. Here are a few examples. - - - You can run a kernel with GRSEC and PAX. This will add many safety - checks, both at compile-time and run-time; it will also defeat many - exploits, thanks to techniques like address randomization. It doesn't - require Docker-specific configuration, since those security features - apply system-wide, independent of containers. - - If your distribution comes with security model templates for - Docker containers, you can use them out of the box. For instance, we - ship a template that works with AppArmor and Red Hat comes with SELinux - policies for Docker. 
These templates provide an extra safety net (even though they overlap
greatly with capabilities).
 - You can define your own policies using your favorite access control
   mechanism.

Just like there are many third-party tools to augment Docker
containers with e.g., special network topologies or shared
filesystems, you can expect to see tools to harden existing Docker
containers without affecting Docker's core.

Recent improvements in Linux namespaces will soon allow running
full-featured containers without root privileges, thanks to the new
user namespace. This is covered in detail [here](
http://s3hh.wordpress.com/2013/07/19/creating-and-using-containers-without-privilege/).
Moreover, this will solve the problem caused by sharing filesystems
between host and guest, since the user namespace allows users within
containers (including the root user) to be mapped to other users in
the host system.

Today, Docker does not directly support user namespaces, but they may
still be utilized by Docker containers on supported kernels, by
directly using the `clone` syscall, or by utilizing the `unshare`
utility. Using this, some users may find it possible to drop more
capabilities from their process, as user namespaces provide an
artificial capabilities set. Likewise, however, this artificial
capabilities set may require use of `capsh` to restrict the
user-namespace capabilities set when using `unshare`.

Eventually, it is expected that Docker will have direct, native
support for user namespaces, simplifying the process of hardening
containers.

## Conclusions

Docker containers are, by default, quite secure; especially if you
take care of running your processes inside the containers as
non-privileged users (i.e., non-`root`).

You can add an extra layer of safety by enabling AppArmor, SELinux,
GRSEC, or your favorite hardening solution.
Last but not least, if you see interesting security features in other
containerization systems, these are simply kernel features that may be
implemented in Docker as well. We welcome users to submit issues, pull
requests, and communicate via the mailing list.

References:

* [Docker Containers: How Secure Are They? (2013)](http://blog.docker.com/2013/08/containers-docker-how-secure-are-they/)
* [On the Security of Containers (2014)](https://medium.com/@ewindisch/on-the-security-of-containers-2c60ffe25a9e)

diff --git a/articles/systemd.md~ b/articles/systemd.md~
deleted file mode 100644
index fddd146b07..0000000000
--- a/articles/systemd.md~
+++ /dev/null
@@ -1,105 +0,0 @@
page_title: Controlling and configuring Docker using Systemd
page_description: Controlling and configuring Docker using Systemd
page_keywords: docker, daemon, systemd, configuration

# Controlling and configuring Docker using Systemd

Many Linux distributions use systemd to start the Docker daemon. This
document shows a few examples of how to customise Docker's settings.

## Starting the Docker daemon

Once Docker is installed, you will need to start the Docker daemon.

    $ sudo systemctl start docker
    # or on older distributions, you may need to use
    $ sudo service docker start

If you want Docker to start at boot, you should also run:

    $ sudo systemctl enable docker
    # or on older distributions, you may need to use
    $ sudo chkconfig docker on

## Custom Docker daemon options

There are a number of ways to configure the daemon flags and
environment variables for your Docker daemon.

If the `docker.service` file is set to use an `EnvironmentFile` (often
pointing to `/etc/sysconfig/docker`) then you can modify the
referenced file.

Otherwise, you can edit the `docker.service` file itself, which can be
found in `/usr/lib/systemd/system` or `/etc/systemd/system`.
- -### Runtime directory and storage driver - -You may want to control the disk space used for Docker images, containers -and volumes by moving it to a separate partition. - -In this example, we'll assume that your `docker.service` file looks something like: - - [Unit] - Description=Docker Application Container Engine - Documentation=http://docs.docker.com - After=network.target docker.socket - Requires=docker.socket - - [Service] - Type=notify - EnvironmentFile=-/etc/sysconfig/docker - ExecStart=/usr/bin/docker -d -H fd:// $OPTIONS - LimitNOFILE=1048576 - LimitNPROC=1048576 - - [Install] - Also=docker.socket - -This will allow us to add extra flags to the `/etc/sysconfig/docker` file by -setting `OPTIONS`: - - OPTIONS="--graph /mnt/docker-data --storage-driver btrfs" - -You can also set other environment variables in this file, for example, the -`HTTP_PROXY` environment variables described below. - -### HTTP Proxy - -This example overrides the default `docker.service` file. - -If you are behind a HTTP proxy server, for example in corporate settings, -you will need to add this configuration in the Docker systemd service file. - -First, create a systemd drop-in directory for the docker service: - - mkdir /etc/systemd/system/docker.service.d - -Now create a file called `/etc/systemd/system/docker.service.d/http-proxy.conf` -that adds the `HTTP_PROXY` environment variable: - - [Service] - Environment="HTTP_PROXY=http://proxy.example.com:80/" - -If you have internal Docker registries that you need to contact without -proxying you can specify them via the `NO_PROXY` environment variable: - - Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com" - -Flush changes: - - $ sudo systemctl daemon-reload - -Restart Docker: - - $ sudo systemctl restart docker - -## Manually creating the systemd unit files - -When installing the binary without a package, you may want -to integrate Docker with systemd. 
For this, simply install the two unit files -(service and socket) from [the github -repository](https://github.com/docker/docker/tree/master/contrib/init/systemd) -to `/etc/systemd/system`. - - diff --git a/articles/using_supervisord.md~ b/articles/using_supervisord.md~ deleted file mode 100644 index 5806707ee6..0000000000 --- a/articles/using_supervisord.md~ +++ /dev/null @@ -1,112 +0,0 @@ -page_title: Using Supervisor with Docker -page_description: How to use Supervisor process management with Docker -page_keywords: docker, supervisor, process management - -# Using Supervisor with Docker - -> **Note**: -> - **If you don't like sudo** then see [*Giving non-root -> access*](/installation/binaries/#giving-non-root-access) - -Traditionally a Docker container runs a single process when it is -launched, for example an Apache daemon or a SSH server daemon. Often -though you want to run more than one process in a container. There are a -number of ways you can achieve this ranging from using a simple Bash -script as the value of your container's `CMD` instruction to installing -a process management tool. - -In this example we're going to make use of the process management tool, -[Supervisor](http://supervisord.org/), to manage multiple processes in -our container. Using Supervisor allows us to better control, manage, and -restart the processes we want to run. To demonstrate this we're going to -install and manage both an SSH daemon and an Apache daemon. - -## Creating a Dockerfile - -Let's start by creating a basic `Dockerfile` for our -new image. - - FROM ubuntu:13.04 - MAINTAINER examples@docker.com - -## Installing Supervisor - -We can now install our SSH and Apache daemons as well as Supervisor in -our container. 
    RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
    RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor

Here we're installing the `openssh-server`, `apache2` and `supervisor`
(which provides the Supervisor daemon) packages. We're also creating
four new directories that are needed to run our SSH daemon and
Supervisor.

## Adding Supervisor's configuration file

Now let's add a configuration file for Supervisor. The default file is
called `supervisord.conf` and is located in `/etc/supervisor/conf.d/`.

    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

Let's see what is inside our `supervisord.conf` file.

    [supervisord]
    nodaemon=true

    [program:sshd]
    command=/usr/sbin/sshd -D

    [program:apache2]
    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"

The `supervisord.conf` configuration file contains directives that
configure Supervisor and the processes it manages. The first block,
`[supervisord]`, provides configuration for Supervisor itself. We're
using one directive, `nodaemon`, which tells Supervisor to run
interactively rather than daemonize.

The next two blocks manage the services we wish to control. Each block
controls a separate process. The blocks contain a single directive,
`command`, which specifies what command to run to start each process.

## Exposing ports and running Supervisor

Now let's finish our `Dockerfile` by exposing some required ports and
specifying the `CMD` instruction to start Supervisor when our
container launches.

    EXPOSE 22 80
    CMD ["/usr/bin/supervisord"]

Here we've exposed ports 22 and 80 on the container and we're running
the `/usr/bin/supervisord` binary when the container launches.

## Building our image

We can now build our new image.

    $ sudo docker build -t <yourname>/supervisord .
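For reference, here is the complete `Dockerfile` assembled from the
snippets above:

```dockerfile
FROM ubuntu:13.04
MAINTAINER examples@docker.com

RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```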
## Running our Supervisor container

Once we've built the image, we can launch a container from it.

    $ sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
    2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
    2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
    2013-11-25 18:53:22,342 INFO supervisord started with pid 1
    2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
    2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
    . . .

We've launched a new container interactively using the `docker run`
command. That container has run Supervisor and launched the SSH and
Apache daemons with it. We've specified the `-p` flag to expose ports
22 and 80. From here we can now identify the exposed ports and connect
to one or both of the SSH and Apache daemons.

diff --git a/compose/cli.md~ b/compose/cli.md~
deleted file mode 100644
index fb270f4471..0000000000
--- a/compose/cli.md~
+++ /dev/null
@@ -1,181 +0,0 @@
no_version_dropdown: true
page_title: Compose CLI reference
page_description: Compose CLI reference
page_keywords: fig, composition, compose, docker, orchestration, cli, reference

# CLI reference

Most Docker Compose commands are run against one or more services. If
the service is not specified, the command will apply to all services.

For full usage information, run `docker-compose [COMMAND] --help`.

## Commands

### build

Builds or rebuilds services.

Services are built once and then tagged as `project_service`, e.g.,
`composetest_db`. If you change a service's Dockerfile or the contents
of its build directory, run `docker-compose build` to rebuild it.

### help

Displays help and usage instructions for a command.

### kill

Forces running containers to stop by sending a `SIGKILL` signal.
Optionally, the signal can be passed, for example:

    $ docker-compose kill -s SIGINT

### logs

Displays log output from services.
### port

Prints the public port for a port binding.

### ps

Lists containers.

### pull

Pulls service images.

### rm

Removes stopped service containers.

### run

Runs a one-off command on a service.

For example:

    $ docker-compose run web python manage.py shell

will start the `web` service and then run `manage.py shell` in Python.
Note that by default, linked services will also be started, unless
they are already running.

One-off commands are started in new containers with the same
configuration as a normal container for that service, so volumes,
links, etc. will all be created as expected. When using `run`, there
are two differences from bringing up a container normally:

1. the command will be overridden with the one specified. So, if you
run `docker-compose run web bash`, the container's web command (which
could default to, e.g., `python app.py`) will be overridden with
`bash`;

2. by default, no ports will be created, in case they collide with
already opened ports.

Links are also created between one-off commands and the other
containers which are part of that service. So, for example, you could
run:

    $ docker-compose run db psql -h db -U docker

This would open up an interactive PostgreSQL shell for the linked `db`
container (which would get created or started as needed).

If you do not want linked containers to start when running the one-off
command, specify the `--no-deps` flag:

    $ docker-compose run --no-deps web python manage.py shell

Similarly, if you do want the service's ports to be created and mapped
to the host, specify the `--service-ports` flag:

    $ docker-compose run --service-ports web python manage.py shell

### scale

Sets the number of containers to run for a service.

Numbers are specified as arguments in the form `service=num`. For
example:

    $ docker-compose scale web=2 worker=3

### start

Starts existing containers for a service.
-
-### stop
-
-Stops running containers without removing them. They can be started again with
-`docker-compose start`.
-
-### up
-
-Builds, (re)creates, starts, and attaches to containers for a service.
-
-Linked services will be started, unless they are already running.
-
-By default, `docker-compose up` will aggregate the output of each container and,
-when it exits, all containers will be stopped. Running `docker-compose up -d`
-will start the containers in the background and leave them running.
-
-By default, if there are existing containers for a service, `docker-compose up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `docker-compose.yml` are picked up. If you do not want containers stopped and recreated, use `docker-compose up --no-recreate`. This will still start any stopped containers, if needed.
-
-[volumes-from]: http://docs.docker.io/en/latest/use/working_with_volumes/
-
-## Options
-
-### --verbose
-
- Shows more output
-
-### --version
-
- Prints version and exits
-
-### -f, --file FILE
-
- Specifies an alternate Compose yaml file (default: `docker-compose.yml`)
-
-### -p, --project-name NAME
-
- Specifies an alternate project name (default: current directory name)
-
-
-## Environment Variables
-
-Several environment variables are available for you to configure Compose's behaviour.
-
-Variables starting with `DOCKER_` are the same as those used to configure the
-Docker command-line client. If you're using boot2docker, `$(boot2docker shellinit)`
-will set them to their correct values.
-
-### COMPOSE\_PROJECT\_NAME
-
-Sets the project name, which is prepended to the name of every container started by Compose. Defaults to the `basename` of the current working directory.
-
-### COMPOSE\_FILE
-
-Sets the path to the `docker-compose.yml` to use. Defaults to `docker-compose.yml` in the current working directory.
-
-### DOCKER\_HOST
-
-Sets the URL of the docker daemon.
As with the Docker client, defaults to `unix:///var/run/docker.sock`.
-
-### DOCKER\_TLS\_VERIFY
-
-When set to anything other than an empty string, enables TLS communication with
-the daemon.
-
-### DOCKER\_CERT\_PATH
-
-Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`.
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Yaml file reference](yml.md)
-- [Compose environment variables](env.md)
-- [Compose command line completion](completion.md)
diff --git a/compose/completion.md~ b/compose/completion.md~
deleted file mode 100644
index c17f3173dc..0000000000
--- a/compose/completion.md~
+++ /dev/null
@@ -1,41 +0,0 @@
-no_version_dropdown: true
----
-layout: default
-title: Command Completion
----
-
-Command Completion
-==================
-
-Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion)
-for the bash shell.
-
-Installing Command Completion
------------------------------
-
-Make sure bash completion is installed. On most current Linux distributions
-(non-minimal installations), bash completion should already be available.
-On a Mac, install it with `brew install bash-completion`.
-
-Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), for example:
-
-    curl -L https://raw.githubusercontent.com/docker/compose/1.1.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
-
-Completion will be available upon next login.
-
-Available completions
----------------------
-Depending on what you have typed on the command line so far, it will complete:
-
- - available docker-compose commands
- - options that are available for a particular command
- - service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles).
For `docker-compose scale`, completed service names will automatically have "=" appended.
- - arguments for selected options, e.g. `docker-compose kill -s` will complete some signals like SIGHUP and SIGUSR1.
-
-Enjoy working with Compose faster and with fewer typos!
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Yaml file reference](yml.md)
-- [Compose environment variables](env.md)
diff --git a/compose/django.md~ b/compose/django.md~
deleted file mode 100644
index 816c6e8a33..0000000000
--- a/compose/django.md~
+++ /dev/null
@@ -1,99 +0,0 @@
-no_version_dropdown: true
----
-layout: default
-title: Getting started with Compose and Django
----
-
-Getting started with Compose and Django
-===================================
-
-Let's use Compose to set up and run a Django/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
-
-Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
-
-    FROM python:2.7
-    ENV PYTHONUNBUFFERED 1
-    RUN mkdir /code
-    WORKDIR /code
-    ADD requirements.txt /code/
-    RUN pip install -r requirements.txt
-    ADD . /code/
-
-That'll install our application inside an image that has Python and all of our Python dependencies installed. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
-
-Second, we define our Python dependencies in a file called `requirements.txt`:
-
-    Django
-    psycopg2
-
-Simple enough. Finally, this is all tied together with a file called `docker-compose.yml`.
It describes the services that make up our app (a web server and a database), what Docker images they use, how they link together, what volumes will be mounted inside the containers, and what ports they expose.
-
-    db:
-      image: postgres
-    web:
-      build: .
-      command: python manage.py runserver 0.0.0.0:8000
-      volumes:
-        - .:/code
-      ports:
-        - "8000:8000"
-      links:
-        - db
-
-See the [`docker-compose.yml` reference](yml.md) for more information on how it works.
-
-We can now start a Django project using `docker-compose run`:
-
-    $ docker-compose run web django-admin.py startproject composeexample .
-
-First, Compose will build an image for the `web` service using the `Dockerfile`. It will then run `django-admin.py startproject composeexample .` inside a container using that image.
-
-This will generate a Django app inside the current directory:
-
-    $ ls
-    Dockerfile docker-compose.yml composeexample manage.py requirements.txt
-
-The first thing we need to do is set up the database connection. Replace the `DATABASES = ...` definition in `composeexample/settings.py` with the following:
-
-    DATABASES = {
-        'default': {
-            'ENGINE': 'django.db.backends.postgresql_psycopg2',
-            'NAME': 'postgres',
-            'USER': 'postgres',
-            'HOST': 'db',
-            'PORT': 5432,
-        }
-    }
-
-These settings are determined by the [postgres](https://registry.hub.docker.com/_/postgres/) Docker image we are using.
-
-Then, run `docker-compose up`:
-
-    Recreating myapp_db_1...
-    Recreating myapp_web_1...
-    Attaching to myapp_db_1, myapp_web_1
-    myapp_db_1 |
-    myapp_db_1 | PostgreSQL stand-alone backend 9.1.11
-    myapp_db_1 | 2014-01-27 12:17:03 UTC LOG: database system is ready to accept connections
-    myapp_db_1 | 2014-01-27 12:17:03 UTC LOG: autovacuum launcher started
-    myapp_web_1 | Validating models...
-
-    myapp_web_1 |
-    myapp_web_1 | 0 errors found
-    myapp_web_1 | January 27, 2014 - 12:12:40
-    myapp_web_1 | Django version 1.6.1, using settings 'composeexample.settings'
-    myapp_web_1 | Starting development server at http://0.0.0.0:8000/
-    myapp_web_1 | Quit the server with CONTROL-C.
-
-And your Django app should be running at port 8000 on your Docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
-
-You can also run management commands with Docker. To set up your database, for example, run `docker-compose up` and in another terminal run:
-
-    $ docker-compose run web python manage.py syncdb
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Yaml file reference](yml.md)
-- [Compose environment variables](env.md)
-- [Compose command line completion](completion.md)
diff --git a/compose/env.md~ b/compose/env.md~
deleted file mode 100644
index 9cc85ef8e5..0000000000
--- a/compose/env.md~
+++ /dev/null
@@ -1,41 +0,0 @@
-no_version_dropdown: true
----
-layout: default
-title: Compose environment variables reference
----
-
-Environment variables reference
-===============================
-
-**Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](yml.md#links) for details.
-
-Compose uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container.
-
-To see what environment variables are available to a service, run `docker-compose run SERVICE env`.
-
-name\_PORT<br>
-Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432`
-
-name\_PORT\_num\_protocol<br>
-Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432`
-
-name\_PORT\_num\_protocol\_ADDR<br>
-Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5`
-
-name\_PORT\_num\_protocol\_PORT<br>
-Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432`
-
-name\_PORT\_num\_protocol\_PROTO<br>
-Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`
-
-name\_NAME<br>
-Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
-
-[Docker links]: http://docs.docker.com/userguide/dockerlinks/
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Yaml file reference](yml.md)
-- [Compose command line completion](completion.md)
diff --git a/compose/index.md~ b/compose/index.md~
deleted file mode 100644
index a028143af5..0000000000
--- a/compose/index.md~
+++ /dev/null
@@ -1,190 +0,0 @@
-no_version_dropdown: true
-page_title: Compose: Multi-container orchestration for Docker
-page_description: Introduction and Overview of Compose
-page_keywords: documentation, docs, docker, compose, orchestration, containers
-
-
-# Docker Compose
-
-Compose is a tool for defining and running complex applications with Docker.
-With Compose, you define a multi-container application in a single file, then
-spin your application up with a single command that does everything needed to
-get it running.
-
-Compose is great for development environments, staging servers, and CI. We don't
-recommend that you use it in production yet.
-
-Using Compose is basically a three-step process.
-
-First, you define your app's environment with a `Dockerfile` so it can be
-reproduced anywhere:
-
-```Dockerfile
-FROM python:2.7
-WORKDIR /code
-ADD requirements.txt /code/
-RUN pip install -r requirements.txt
-ADD . /code
-CMD python app.py
-```
-
-Next, you define the services that make up your app in `docker-compose.yml` so
-they can be run together in an isolated environment:
-
-```yaml
-web:
-  build: .
-  links:
-   - db
-  ports:
-   - "8000:8000"
-db:
-  image: postgres
-```
-
-Lastly, run `docker-compose up` and Compose will start and run your entire app.
- -Compose has commands for managing the whole lifecycle of your application: - - * Start, stop and rebuild services - * View the status of running services - * Stream the log output of running services - * Run a one-off command on a service - -## Compose documentation - -- [Installing Compose](install.md) -- [Command line reference](cli.md) -- [Yaml file reference](yml.md) -- [Compose environment variables](env.md) -- [Compose command line completion](completion.md) - -## Quick start - -Let's get started with a walkthrough of getting a simple Python web app running -on Compose. It assumes a little knowledge of Python, but the concepts -demonstrated here should be understandable even if you're not familiar with -Python. - -### Installation and set-up - -First, [install Docker and Compose](install.md). - -Next, you'll want to make a directory for the project: - - $ mkdir composetest - $ cd composetest - -Inside this directory, create `app.py`, a simple web app that uses the Flask -framework and increments a value in Redis: - -```python -from flask import Flask -from redis import Redis -import os -app = Flask(__name__) -redis = Redis(host='redis', port=6379) - -@app.route('/') -def hello(): - redis.incr('hits') - return 'Hello World! I have been seen %s times.' % redis.get('hits') - -if __name__ == "__main__": - app.run(host="0.0.0.0", debug=True) -``` - -Next, define the Python dependencies in a file called `requirements.txt`: - - flask - redis - -### Create a Docker image - -Now, create a Docker image containing all of your app's dependencies. You -specify how to build the image using a file called -[`Dockerfile`](http://docs.docker.com/reference/builder/): - - FROM python:2.7 - ADD . /code - WORKDIR /code - RUN pip install -r requirements.txt - -This tells Docker to include Python, your code, and your Python dependencies in -a Docker image. 
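The hit-counter in `app.py` above only needs two Redis calls, `incr` and `get`, so its logic can be exercised without a running Redis by swapping in a tiny in-memory stand-in (an illustrative sketch; the real app connects to the `redis` service):

```python
# In-memory stand-in for the two Redis calls app.py uses (illustrative only;
# the real app talks to the linked `redis` container).
class FakeRedis(object):
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

redis = FakeRedis()

def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

print(hello())  # Hello World! I have been seen 1 times.
print(hello())  # Hello World! I have been seen 2 times.
```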
For more information on how to write Dockerfiles, see the -[Docker user -guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) -and the -[Dockerfile reference](http://docs.docker.com/reference/builder/). - -### Define services - -Next, define a set of services using `docker-compose.yml`: - - web: - build: . - command: python app.py - ports: - - "5000:5000" - volumes: - - .:/code - links: - - redis - redis: - image: redis - -This defines two services: - - - `web`, which is built from the `Dockerfile` in the current directory. It also - says to run the command `python app.py` inside the image, forward the exposed - port 5000 on the container to port 5000 on the host machine, connect up the - Redis service, and mount the current directory inside the container so we can - work on code without having to rebuild the image. - - `redis`, which uses the public image - [redis](https://registry.hub.docker.com/_/redis/), which gets pulled from the - Docker Hub registry. - -### Build and run your app with Compose - -Now, when you run `docker-compose up`, Compose will pull a Redis image, build an -image for your code, and start everything up: - - $ docker-compose up - Pulling image redis... - Building web... - Starting composetest_redis_1... - Starting composetest_web_1... - redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3 - web_1 | * Running on http://0.0.0.0:5000/ - -The web app should now be listening on port 5000 on your Docker daemon host (if -you're using Boot2docker, `boot2docker ip` will tell you its address). - -If you want to run your services in the background, you can pass the `-d` flag -(for daemon mode) to `docker-compose up` and use `docker-compose ps` to see what -is currently running: - - $ docker-compose up -d - Starting composetest_redis_1... - Starting composetest_web_1... 
-
-    $ docker-compose ps
-            Name                 Command            State        Ports
-    -------------------------------------------------------------------
-    composetest_redis_1   /usr/local/bin/run         Up
-    composetest_web_1     /bin/sh -c python app.py   Up      5000->5000/tcp
-
-The `docker-compose run` command allows you to run one-off commands for your
-services. For example, to see what environment variables are available to the
-`web` service:
-
-    $ docker-compose run web env
-
-See `docker-compose --help` to see other available commands.
-
-If you started Compose with `docker-compose up -d`, you'll probably want to stop
-your services once you've finished with them:
-
-    $ docker-compose stop
-
-At this point, you have seen the basics of how Compose works.
-
-
diff --git a/compose/install.md~ b/compose/install.md~
deleted file mode 100644
index c02e885665..0000000000
--- a/compose/install.md~
+++ /dev/null
@@ -1,52 +0,0 @@
-no_version_dropdown: true
-page_title: Installing Compose
-page_description: How to install Docker Compose
-page_keywords: compose, orchestration, install, installation, docker, documentation
-
-
-## Installing Compose
-
-To install Compose, you'll need to install Docker first. You'll then install
-Compose with a `curl` command.
-
-### Install Docker
-
-First, you'll need to install Docker version 1.3 or greater.
-
-If you're on OS X, you can use the
-[OS X installer](https://docs.docker.com/installation/mac/) to install both
-Docker and the OS X helper app, boot2docker. Once boot2docker is running, set the
-environment variables that'll configure Docker and Compose to talk to it:
-
-    $(boot2docker shellinit)
-
-To persist the environment variables across shell sessions, add the above line
-to your `~/.bashrc` file.
-
-For complete instructions, or if you are on another platform, consult Docker's
-[installation instructions](https://docs.docker.com/installation/).
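`$(boot2docker shellinit)` works because `shellinit` prints `export VAR=value` lines that the shell then evaluates. A small sketch of that mechanism (the sample values below are illustrative, not captured from a real install):

```python
# Turn `export VAR=value` lines, as emitted by `boot2docker shellinit`,
# into a dict. The sample values are made up for illustration.
sample = """\
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/me/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
"""

env = {}
for line in sample.splitlines():
    if line.startswith("export "):
        name, _, value = line[len("export "):].partition("=")
        env[name] = value

print(env["DOCKER_HOST"])  # tcp://192.168.59.103:2376
```

Evaluating those lines in the shell is exactly what sets the `DOCKER_HOST`, `DOCKER_CERT_PATH`, and `DOCKER_TLS_VERIFY` variables described in the CLI reference.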
-
-### Install Compose
-
-To install Compose, run the following commands:
-
-    curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
-    chmod +x /usr/local/bin/docker-compose
-
-Optionally, you can also install [command completion](completion.md) for the
-bash shell.
-
-Compose is available for OS X and 64-bit Linux. If you're on another platform,
-Compose can also be installed as a Python package:
-
-    $ sudo pip install -U docker-compose
-
-No further steps are required; Compose should now be successfully installed.
-You can test the installation by running `docker-compose --version`.
-
-## Compose documentation
-
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Yaml file reference](yml.md)
-- [Compose environment variables](env.md)
-- [Compose command line completion](completion.md)
diff --git a/compose/rails.md~ b/compose/rails.md~
deleted file mode 100644
index 4b59a6fc9e..0000000000
--- a/compose/rails.md~
+++ /dev/null
@@ -1,105 +0,0 @@
-no_version_dropdown: true
----
-layout: default
-title: Getting started with Compose and Rails
----
-
-Getting started with Compose and Rails
-==================================
-
-We're going to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
-
-Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
-
-    FROM ruby:2.2.0
-    RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
-    RUN mkdir /myapp
-    WORKDIR /myapp
-    ADD Gemfile /myapp/Gemfile
-    RUN bundle install
-    ADD . /myapp
-
-That'll put our application code inside an image with Ruby, Bundler and all our dependencies.
For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
-
-Next, we have a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.
-
-    source 'https://rubygems.org'
-    gem 'rails', '4.2.0'
-
-Finally, `docker-compose.yml` is where the magic happens. It describes what services our app comprises (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration we need to link them together and expose the web app's port.
-
-    db:
-      image: postgres
-      ports:
-        - "5432"
-    web:
-      build: .
-      command: bundle exec rails s -p 3000 -b '0.0.0.0'
-      volumes:
-        - .:/myapp
-      ports:
-        - "3000:3000"
-      links:
-        - db
-
-With those files in place, we can now generate the Rails skeleton app using `docker-compose run`:
-
-    $ docker-compose run web rails new . --force --database=postgresql --skip-bundle
-
-First, Compose will build the image for the `web` service using the `Dockerfile`. Then it'll run `rails new` inside a new container, using that image. Once it's done, you should have a fresh app generated:
-
-    $ ls
-    Dockerfile app docker-compose.yml tmp
-    Gemfile bin lib vendor
-    Gemfile.lock config log
-    README.rdoc config.ru public
-    Rakefile db test
-
-Uncomment the line in your new `Gemfile` which loads `therubyracer`, so that we have a JavaScript runtime:
-
-    gem 'therubyracer', platforms: :ruby
-
-Now that we've got a new `Gemfile`, we need to build the image again. (This, and changes to the `Dockerfile` itself, should be the only times you'll need to rebuild.)
-
-    $ docker-compose build
-
-The app is now bootable, but we're not quite there yet.
By default, Rails expects a database to be running on `localhost` - we need to point it at the `db` container instead. We also need to change the database and username to align with the defaults set by the `postgres` image.
-
-Open up your newly-generated `database.yml`. Replace its contents with the following:
-
-    development: &default
-      adapter: postgresql
-      encoding: unicode
-      database: postgres
-      pool: 5
-      username: postgres
-      password:
-      host: db
-
-    test:
-      <<: *default
-      database: myapp_test
-
-We can now boot the app.
-
-    $ docker-compose up
-
-If all's well, you should see some PostgreSQL output, and then, after a few seconds, the familiar refrain:
-
-    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick 1.3.1
-    myapp_web_1 | [2014-01-17 17:16:29] INFO  ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]
-    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick::HTTPServer#start: pid=1 port=3000
-
-Finally, we just need to create the database. In another terminal, run:
-
-    $ docker-compose run web rake db:create
-
-And we're rolling - your app should now be running on port 3000 on your Docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Yaml file reference](yml.md)
-- [Compose environment variables](env.md)
-- [Compose command line completion](completion.md)
diff --git a/compose/wordpress.md~ b/compose/wordpress.md~
deleted file mode 100644
index 4359294d18..0000000000
--- a/compose/wordpress.md~
+++ /dev/null
@@ -1,100 +0,0 @@
-no_version_dropdown: true
----
-layout: default
-title: Getting started with Compose and Wordpress
----
-
-Getting started with Compose and Wordpress
-======================================
-
-Compose makes it nice and easy to run Wordpress in an isolated environment.
[Install Compose](install.md), then download Wordpress into the current directory: - - $ curl https://wordpress.org/latest.tar.gz | tar -xvzf - - -This will create a directory called `wordpress`, which you can rename to the name of your project if you wish. Inside that directory, we create `Dockerfile`, a file that defines what environment your app is going to run in: - -``` -FROM orchardup/php5 -ADD . /code -``` - -This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). - -Next up, `docker-compose.yml` starts our web service and a separate MySQL instance: - -``` -web: - build: . - command: php -S 0.0.0.0:8000 -t /code - ports: - - "8000:8000" - links: - - db - volumes: - - .:/code -db: - image: orchardup/mysql - environment: - MYSQL_DATABASE: wordpress -``` - -Two supporting files are needed to get this working - first up, `wp-config.php` is the standard Wordpress config file with a single change to point the database configuration at the `db` container: - -``` - -### links - -Link to containers in another service. Either specify both the service name and -the link alias (`SERVICE:ALIAS`), or just the service name (which will also be -used for the alias). - -``` -links: - - db - - db:database - - redis -``` - -An entry with the alias' name will be created in `/etc/hosts` inside containers -for this service, e.g: - -``` -172.17.2.186 db -172.17.2.186 database -172.17.2.187 redis -``` - -Environment variables will also be created - see the [environment variable -reference](env.md) for details. - -### external_links - -Link to containers started outside this `docker-compose.yml` or even outside -of Compose, especially for containers that provide shared or common services. 
-`external_links` follow semantics similar to `links` when specifying both the -container name and the link alias (`CONTAINER:ALIAS`). - -``` -external_links: - - redis_1 - - project_db_1:mysql - - project_db_1:postgresql -``` - -### ports - -Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container -port (a random host port will be chosen). - -> **Note:** When mapping ports in the `HOST:CONTAINER` format, you may experience -> erroneous results when using a container port lower than 60, because YAML will -> parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason, -> we recommend always explicitly specifying your port mappings as strings. - -``` -ports: - - "3000" - - "8000:8000" - - "49100:22" - - "127.0.0.1:8001:8001" -``` - -### expose - -Expose ports without publishing them to the host machine - they'll only be -accessible to linked services. Only the internal port can be specified. - -``` -expose: - - "3000" - - "8000" -``` - -### volumes - -Mount paths as volumes, optionally specifying a path on the host machine -(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). - -``` -volumes: - - /var/lib/mysql - - cache/:/tmp/cache - - ~/configs:/etc/configs/:ro -``` - -### volumes_from - -Mount all of the volumes from another service or container. - -``` -volumes_from: - - service_name - - container_name -``` - -### environment - -Add environment variables. You can use either an array or a dictionary. - -Environment variables with only a key are resolved to their values on the -machine Compose is running on, which can be helpful for secret or host-specific values. - -``` -environment: - RACK_ENV: development - SESSION_SECRET: - -environment: - - RACK_ENV=development - - SESSION_SECRET -``` - -### env_file - -Add environment variables from a file. Can be a single value or a list. - -Environment variables specified in `environment` override these values. 
-
-```
-env_file:
-  - .env
-```
-
-For example, the `.env` file might contain:
-
-```
-RACK_ENV: development
-```
-
-### net
-
-Networking mode. Use the same values as the docker client `--net` parameter.
-
-```
-net: "bridge"
-net: "none"
-net: "container:[name or id]"
-net: "host"
-```
-
-### dns
-
-Custom DNS servers. Can be a single value or a list.
-
-```
-dns: 8.8.8.8
-dns:
-  - 8.8.8.8
-  - 9.9.9.9
-```
-
-### cap_add, cap_drop
-
-Add or drop container capabilities.
-See `man 7 capabilities` for a full list.
-
-```
-cap_add:
-  - ALL
-
-cap_drop:
-  - NET_ADMIN
-  - SYS_ADMIN
-```
-
-### dns_search
-
-Custom DNS search domains. Can be a single value or a list.
-
-```
-dns_search: example.com
-dns_search:
-  - dc1.example.com
-  - dc2.example.com
-```
-
-### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged, restart, stdin\_open, tty, cpu\_shares
-
-Each of these is a single value, analogous to its
-[docker run](https://docs.docker.com/reference/run/) counterpart.
-
-```
-cpu_shares: 73
-
-working_dir: /code
-entrypoint: /code/entrypoint.sh
-user: postgresql
-
-hostname: foo
-domainname: foo.com
-
-mem_limit: 1000000000
-privileged: true
-
-restart: always
-
-stdin_open: true
-tty: true
-```
-
-## Compose documentation
-
-- [Installing Compose](install.md)
-- [User guide](index.md)
-- [Command line reference](cli.md)
-- [Compose environment variables](env.md)
-- [Compose command line completion](completion.md)
diff --git a/docker-hub-enterprise/install-config.md~ b/docker-hub-enterprise/install-config.md~
deleted file mode 100644
index 0b7bcfd6fe..0000000000
--- a/docker-hub-enterprise/install-config.md~
+++ /dev/null
@@ -1,8 +0,0 @@
-page_title: Using Docker Hub Enterprise Installation
-page_description: Docker Hub Enterprise Installation
-page_keywords: docker hub enterprise
-
-# Docker Hub Enterprise Installation
-
-Documentation coming soon.
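The `ports` note in the `docker-compose.yml` reference above warns that YAML parses unquoted `xx:yy` numbers as sexagesimal (base 60). The arithmetic behind that warning can be sketched in plain Python (this mimics the YAML 1.1 integer rule; it is not Compose or PyYAML code):

```python
# YAML 1.1 resolves scalars like 49100:22 as base-60 integers:
# each colon-separated part multiplies the running total by 60.
# Quoting the mapping ("49100:22") avoids this.
def yaml11_sexagesimal(scalar):
    total = 0
    for part in scalar.split(":"):
        total = total * 60 + int(part)
    return total

print(yaml11_sexagesimal("49100:22"))  # 2946022, not a port mapping
```

So an unquoted `49100:22` silently becomes the integer 2946022, which is why the reference recommends always writing port mappings as strings.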
-
diff --git a/docker-hub-enterprise/usage.md~ b/docker-hub-enterprise/usage.md~
deleted file mode 100644
index 252223ef70..0000000000
--- a/docker-hub-enterprise/usage.md~
+++ /dev/null
@@ -1,9 +0,0 @@
-page_title: Using Docker Hub Enterprise
-page_description: Docker Hub Enterprise
-page_keywords: docker hub enterprise
-
-# Docker Hub Enterprise
-
-Documentation coming soon.
-
-
diff --git a/docker-hub/accounts.md~ b/docker-hub/accounts.md~
deleted file mode 100644
index e4623f9980..0000000000
--- a/docker-hub/accounts.md~
+++ /dev/null
@@ -1,54 +0,0 @@
-page_title: Accounts on Docker Hub
-page_description: Docker Hub accounts
-page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation
-
-# Accounts on Docker Hub
-
-## Docker Hub Accounts
-
-You can `search` for Docker images and `pull` them from [Docker
-Hub](https://hub.docker.com) without signing in or even having an
-account. However, in order to `push` images, leave comments, or to *star*
-a repository, you will need a [Docker
-Hub](https://hub.docker.com) account.
-
-### Registration for a Docker Hub Account
-
-You can get a [Docker Hub](https://hub.docker.com) account by
-[signing up for one here](https://hub.docker.com/account/signup/). A valid
-email address is required to register, which you will need to verify for
-account activation.
-
-### Email activation process
-
-You need to have at least one verified email address to be able to use your
-[Docker Hub](https://hub.docker.com) account. If you can't find the validation email,
-you can request another by visiting the [Resend Email Confirmation](
-https://hub.docker.com/account/resend-email-confirmation/) page.
-
-### Password reset process
-
-If you can't access your account for some reason, you can reset your password
-from the [*Password Reset*](https://hub.docker.com/account/forgot-password/)
-page.
- -## Organizations & Groups - -Also available on the Docker Hub are organizations and groups that allow -you to collaborate across your organization or team. You can see what -organizations [you belong to and add new organizations]( -https://hub.docker.com/account/organizations/) from the Account Settings -tab. They are also listed below your user name on your repositories page and in your account profile. - -![organizations](/docker-hub/hub-images/orgs.png) - -From within your organizations you can create groups that allow you to -further manage who can interact with your repositories. - -![groups](/docker-hub/hub-images/groups.png) - -You can add or invite users to join groups by clicking on the organization and then clicking the edit button for the group to which you want to add members. Enter a user-name (for current Hub users) or email address (if they are not yet Hub users) for the person you want to invite. They will receive an email invitation to join the group. - -![invite members](/docker-hub/hub-images/invite.png) - - diff --git a/docker-hub/builds.md~ b/docker-hub/builds.md~ deleted file mode 100644 index 164018e827..0000000000 --- a/docker-hub/builds.md~ +++ /dev/null @@ -1,350 +0,0 @@ -page_title: Automated Builds on Docker Hub -page_description: Docker Hub Automated Builds -page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation, trusted, builds, trusted builds, automated builds - -# Automated Builds on Docker Hub - -## About Automated Builds - -*Automated Builds* are a special feature of Docker Hub which allow you to -use [Docker Hub's](https://hub.docker.com) build clusters to automatically -create images from a specified `Dockerfile` and a GitHub or Bitbucket repository -(or "context"). The system will clone your repository and build the image -described by the `Dockerfile` using the repository as the context. 
The -resulting automated image will then be uploaded to the Docker Hub registry -and marked as an *Automated Build*. - -Automated Builds have several advantages: - -* Users of *your* Automated Build can trust that the resulting -image was built exactly as specified. - -* The `Dockerfile` will be available to anyone with access to -your repository on the Docker Hub registry. - -* Because the process is automated, Automated Builds help to -make sure that your repository is always up to date. - -Automated Builds are supported for both public and private repositories -on both [GitHub](http://github.com) and [Bitbucket](https://bitbucket.org/). - -To use Automated Builds, you must have an [account on Docker Hub]( http://docs.docker.com/userguide/dockerhub/#creating-a-docker-hub-account) -and on GitHub and/or Bitbucket. In either case, the account needs -to be properly validated and activated before you can link to it. - -## Setting up Automated Builds with GitHub - -In order to set up an Automated Build, you need to first link your -[Docker Hub](https://hub.docker.com) account with a GitHub account. -This will allow the registry to see your repositories. - -> *Note:* -> Automated Builds currently require *read* and *write* access since -> [Docker Hub](https://hub.docker.com) needs to set up a GitHub service -> hook. We have no choice here, this is how GitHub manages permissions, sorry! -> We do guarantee nothing else will be touched in your account. - -To get started, log into your Docker Hub account and click the -"+ Add Repository" button at the upper right of the screen. Then select -[Automated Build](https://registry.hub.docker.com/builds/add/). - -Select the [GitHub service](https://registry.hub.docker.com/associate/github/). - -Then follow the onscreen instructions to authorize and link your -GitHub account to Docker Hub. Once it is linked, you'll be able to -choose a repo from which to create the Automated Build.
- -### Creating an Automated Build - -You can [create an Automated Build]( -https://registry.hub.docker.com/builds/github/select/) from any of your -public or private GitHub repositories with a `Dockerfile`. - -### GitHub Submodules - -If your GitHub repository contains links to private submodules, you'll -need to add a deploy key from your Docker Hub repository. - -Your Docker Hub deploy key is located under the "Build Details" -menu on the Automated Build's main page in the Hub. Add this key -to your GitHub submodule by visiting the Settings page for the -repository on GitHub and selecting "Deploy keys". - - - - - - - - - - - - - - - - - - - - - -
1. Your Automated Build's deploy key is in the "Build Details" menu, under "Deploy keys".
2. In your GitHub submodule's repository Settings page, add the deploy key from your Docker Hub Automated Build.
- -### GitHub Organizations - -GitHub organizations will appear once your membership in that organization is -made public on GitHub. To verify, you can look at the members tab for your -organization on GitHub. - -### GitHub Service Hooks - -Follow the steps below to configure the GitHub service -hooks for your Automated Build:
1. Log in to GitHub.com and go to your repository page. Click "Settings" on the right side of the page. You must have admin privileges on the repository in order to do this.
2. Click "Webhooks & Services" on the left side of the page.
3. Find the service labeled "Docker" and click on it.
4. Make sure the "Active" checkbox is selected and click the "Update service" button to save your changes.
- -## Setting up Automated Builds with Bitbucket - -In order to set up an Automated Build, you need to first link your -[Docker Hub](https://hub.docker.com) account with a Bitbucket account. -This will allow the registry to see your repositories. - -To get started, log into your Docker Hub account and click the -"+ Add Repository" button at the upper right of the screen. Then -select [Automated Build](https://registry.hub.docker.com/builds/add/). - -Select the [Bitbucket source]( -https://registry.hub.docker.com/associate/bitbucket/). - -Then follow the onscreen instructions to authorize and link your -Bitbucket account to Docker Hub. Once it is linked, you'll be able -to choose a repository from which to create the Automated Build. - -### Creating an Automated Build - -You can [create an Automated Build]( -https://registry.hub.docker.com/builds/bitbucket/select/) from any of your -public or private Bitbucket repositories with a `Dockerfile`. - -### Adding a Hook - -When you link your Docker Hub account, a `POST` hook should get automatically -added to your Bitbucket repository. Follow the steps below to confirm or modify the -Bitbucket hooks for your Automated Build:
1. Log in to Bitbucket.org and go to your repository page. Click "Settings" on the far left side of the page, under "Navigation". You must have admin privileges on the repository in order to do this.
2. Click "Hooks" on the near left side of the page, under "Settings".
3. You should now see a list of hooks associated with the repository, including a POST hook that points at registry.hub.docker.com/hooks/bitbucket.
- - -## The Dockerfile and Automated Builds - -During the build process, Docker will copy the contents of your `Dockerfile`. -It will also add it to the [Docker Hub](https://hub.docker.com) for the Docker -community (for public repositories) or approved team members/orgs (for private -repositories) to see on the repository page. - -### README.md - -If you have a `README.md` file in your repository, it will be used as the -repository's full description. The build process will look for a -`README.md` in the same directory as your `Dockerfile`. - -> **Warning:** -> If you change the full description after a build, it will be -> rewritten the next time the Automated Build runs. To make changes, -> modify the `README.md` in the Git repository. - -## Remote Build triggers - -If you need a way to trigger Automated Builds outside of GitHub or Bitbucket, -you can set up a build trigger. When you turn on the build trigger for an -Automated Build, it will give you a URL to which you can send POST requests. -This will trigger the Automated Build, much as with a GitHub webhook. - -Build triggers are available under the Settings menu of each Automated Build -repository on the Docker Hub. - -![Build trigger screen](/docker-hub/hub-images/build-trigger.png) - -You can use `curl` to trigger a build: - -``` -$ curl --data "build=true" -X POST https://registry.hub.docker.com/u/svendowideit/testhook/trigger/be579c82-7c0e-11e4-81c4-0242ac110020/ -OK -``` - -> **Note:** -> You can only trigger one build at a time and no more than one -> every five minutes. If you already have a build pending, or if you -> recently submitted a build request, those requests *will be ignored*. -> To verify everything is working correctly, check the logs of the last -> ten triggers on the settings page. - -## Webhooks - -Automated Builds also include a Webhooks feature. Webhooks can be called -after a successful repository push is made. This includes when a new tag is added -to an existing image.
- -The webhook call will generate a HTTP POST with the following JSON -payload: - -``` -{ - "callback_url": "https://registry.hub.docker.com/u/svendowideit/testhook/hook/2141b5bi5i5b02bec211i4eeih0242eg11000a/", - "push_data": { - "images": [ - "27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3", - "51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c", - ... - ], - "pushed_at": 1.417566161e+09, - "pusher": "trustedbuilder" - }, - "repository": { - "comment_count": 0, - "date_created": 1.417494799e+09, - "description": "", - "dockerfile": "#\n# BUILD\u0009\u0009docker build -t svendowideit/apt-cacher .\n# RUN\u0009\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# \u0009\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\n#\nFROM\u0009\u0009ubuntu\nMAINTAINER\u0009SvenDowideit@home.org.au\n\n\nVOLUME\u0009\u0009[\"/var/cache/apt-cacher-ng\"]\nRUN\u0009\u0009apt-get update ; apt-get install -yq apt-cacher-ng\n\nEXPOSE \u0009\u00093142\nCMD\u0009\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\n", - "full_description": "Docker Hub based automated build from a GitHub repo", - "is_official": false, - "is_private": true, - "is_trusted": true, - "name": "testhook", - "namespace": "svendowideit", - "owner": "svendowideit", - "repo_name": "svendowideit/testhook", - "repo_url": "https://registry.hub.docker.com/u/svendowideit/testhook/", - "star_count": 0, - "status": "Active" - } -} -``` - -Webhooks are available under the Settings menu of each Repository. - -> **Note:** If you want to test your webhook out we recommend using -> a tool like [requestb.in](http://requestb.in/). - -> **Note**: The Docker Hub servers are currently in the IP range -> `162.242.195.64 - 162.242.195.127`, so you can restrict your webhooks to -> accept webhook requests from that set of IP addresses. 
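As an illustration of consuming this payload, the sketch below extracts the `callback_url` field from a saved copy with `sed`. The payload file, URL, and token are invented stand-ins for the example values above, not real Hub data:

```shell
# Minimal sketch of a webhook consumer: pull callback_url out of a saved
# payload. The JSON below is a small stand-in for the payload shown above.
cat > payload.json <<'EOF'
{
  "callback_url": "https://registry.hub.docker.com/u/example/repo/hook/token/",
  "push_data": { "pusher": "example" }
}
EOF

# Extract the callback_url value from its line in the payload.
callback_url=$(sed -n 's/.*"callback_url": "\([^"]*\)".*/\1/p' payload.json)
echo "callback_url is $callback_url"

# A real consumer could then acknowledge the push, for example:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d '{"state": "success"}' "$callback_url"

rm -f payload.json
```

Here `sed` merely stands in for a proper JSON parser; anything that can read a JSON field works equally well.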
- -### Webhook chains - -Webhook chains allow you to chain calls to multiple services. For example, -you can use this to trigger a deployment of your container only after -it has been successfully tested, then update a separate Changelog once the -deployment is complete. -After clicking the "Add webhook" button, simply add as many URLs as necessary -in your chain. - -The first webhook in a chain will be called after a successful push. Subsequent -URLs will be contacted after the callback has been validated. - -### Validating a callback - -In order to validate a callback in a webhook chain, you need to: - -1. Retrieve the `callback_url` value in the request's JSON payload. -1. Send a POST request to this URL containing a valid JSON body. - -> **Note**: A chain request will only be considered complete once the last -> callback has been validated. - -To help you debug or simply view the results of your webhook(s), -view the "History" of the webhook available on its settings page. - -### Callback JSON data - -The following parameters are recognized in callback data: - -* `state` (required): Accepted values are `success`, `failure` and `error`. - If the state isn't `success`, the webhook chain will be interrupted. -* `description`: A string containing miscellaneous information that will be - available on the Docker Hub. Maximum 255 characters. -* `context`: A string containing the context of the operation. Can be retrieved - from the Docker Hub. Maximum 100 characters. -* `target_url`: The URL where the results of the operation can be found. Can be - retrieved on the Docker Hub. - -*Example callback payload:* - - { - "state": "success", - "description": "387 tests PASSED", - "context": "Continuous integration by Acme CI", - "target_url": "http://ci.acme.com/results/afd339c1c3d27" - } - -## Repository links - -Repository links are a way to associate one Automated Build with -another. If one gets updated, the linking system triggers a rebuild -for the other Automated Build.
This makes it easy to keep all your -Automated Builds up to date. - -To add a link, go to the repository for the Automated Build you want to -link to and click on *Repository Links* under the Settings menu at -right. Then, enter the name of the repository that you want to have linked. - -> **Warning:** -> You can add more than one repository link. However, you should -> do so very carefully. Creating a two-way relationship between Automated Builds will -> cause an endless build loop. diff --git a/docker-hub/home.md~ b/docker-hub/home.md~ deleted file mode 100644 index 15baf7b83a..0000000000 --- a/docker-hub/home.md~ +++ /dev/null @@ -1,13 +0,0 @@ -page_title: The Docker Hub Registry Help -page_description: The Docker Registry help documentation home -page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation - -# The Docker Hub Registry Help - -## Introduction - -For your questions about the [Docker Hub](https://hub.docker.com) registry you -can use [this documentation](docs.md). - -If you cannot find something you are looking for, please feel free to -[contact us](https://docker.com/resources/support/). diff --git a/docker-hub/index.md~ b/docker-hub/index.md~ deleted file mode 100644 index c29a5f7873..0000000000 --- a/docker-hub/index.md~ +++ /dev/null @@ -1,23 +0,0 @@ -page_title: The Docker Hub Help -page_description: The Docker Help documentation home -page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation, accounts, organizations, repositories, groups - -# Docker Hub - -![DockerHub](/docker-hub/hub-images/hub.png) - -## [Accounts](accounts/) - -[Learn how to create](accounts/) a [Docker Hub](https://hub.docker.com) -account and manage your organizations and groups. - -## [Repositories](repos/) - -Find out how to share your Docker images in [Docker Hub -repositories](repos/) and how to store and manage private images.
- -## [Automated Builds](builds/) - -Learn how to automate your build and deploy pipeline with [Automated -Builds](builds/) - diff --git a/docker-hub/official_repos.md~ b/docker-hub/official_repos.md~ deleted file mode 100644 index 4ec431238b..0000000000 --- a/docker-hub/official_repos.md~ +++ /dev/null @@ -1,189 +0,0 @@ -page_title: Guidelines for Official Repositories on Docker Hub -page_description: Guidelines for Official Repositories on Docker Hub -page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, official, image, documentation - -# Guidelines for Creating and Documenting Official Repositories - -## Introduction - -You’ve been given the job of creating an image for an Official Repository -hosted on [Docker Hub Registry](https://registry.hub.docker.com/). These are -our guidelines for getting that task done. Even if you’re not -planning to create an Official Repo, you can think of these guidelines as best -practices for image creation generally. - -This document consists of two major sections: - -* A list of expected files, resources and supporting items for your image, -along with best practices for creating those items -* Examples embodying those practices - -## Expected Files & Resources - -### A Git repository - -Your image needs to live in a Git repository, preferably on GitHub. (If you’d -like to use a different provider, please [contact us](mailto:feedback@docker.com) -directly.) Docker **strongly** recommends that this repo be publicly -accessible. - -If the repo is private or has otherwise limited access, you must provide a -means of at least “read-only” access for both general users and for the -docker-library maintainers, who need access for review and building purposes. - -### A Dockerfile - -Complete information on `Dockerfile`s can be found in the [Reference section](https://docs.docker.com/reference/builder/). 
- -We also have a page discussing [best practices for writing `Dockerfile`s](/articles/dockerfile_best-practices). -Your `Dockerfile` should adhere to the following: - -* It must either be written using `FROM scratch` or be based on another -established Official Image. - -* It must follow `Dockerfile` best practices. These are discussed on the -[best practices page](/articles/dockerfile_best-practices). In addition, -Docker engineer Michael Crosby has some good tips for `Dockerfiles` in -this [blog post](http://crosbymichael.com/dockerfile-best-practices-take-2.html). - -While [`ONBUILD` triggers](https://docs.docker.com/reference/builder/#onbuild) -are not required, if you choose to use them you should: - -* Build both `ONBUILD` and non-`ONBUILD` images, with the `ONBUILD` image -built `FROM` the non-`ONBUILD` image. -* The `ONBUILD` image should be specifically tagged, for example, -`ruby:latest` and `ruby:onbuild`, or `ruby:2` and `ruby:2-onbuild`. - -### A short description - -Include a brief description of your image (in plaintext). Only one description -is required; you don't need additional descriptions for each tag. The file -should also: - -* Be named `README-short.txt` -* Reside in the repo for the "latest" tag -* Not exceed 100 characters - -### A logo - -Include a logo of your company or the product (png format preferred). Only one -logo is required; you don't need additional logo files for each tag. The logo -file should have the following characteristics: - -* Be named `logo.png` -* Should reside in the repo for the "latest" tag -* Should fit inside a 200px square, maximized in one dimension (preferably the -width) -* Square or wide (landscape) is preferred over tall (portrait), but exceptions -can be made based on the logo needed - -### A long description - -Include a comprehensive description of your image (in Markdown format, GitHub -flavor preferred). Only one description is required; you don't need additional -descriptions for each tag.
The file should also: - -* Be named `README.md` -* Reside in the repo for the "latest" tag -* Be no longer than absolutely necessary, while still addressing all the -content requirements - -In terms of content, the long description must include the following sections: - -* Overview & links -* How-to/usage -* Issues & contributions - -#### Overview & links - -This section should provide: - -* an overview of the software contained in the image, similar to the -introduction in a Wikipedia entry - -* a selection of links to outside resources that help to describe the software - -* a *mandatory* link to the `Dockerfile` - -#### How-to/usage - -A section that describes how to run and use the image, including common use -cases and example `Dockerfile`s (if applicable). Try to provide clear, step-by-step -instructions wherever possible. - -#### Issues & contributions - -In this section, point users to any resources that can help them contribute to -the project. Include contribution guidelines and any specific instructions -related to your development practices. Include a link to -[Docker's resources for contributors](https://docs.docker.com/contributing/contributing/). -Be sure to include contact info, handles, etc. for official maintainers. - -Also include information letting users know where they can go for help and how -they can file issues with the repo. Point them to any specific IRC channels, -issue trackers, contacts, additional "how-to" information or other resources. - -### License - -Include a file, `LICENSE`, containing any applicable license. Docker recommends using -the license of the software contained in the image, provided it allows Docker, -Inc. to legally build and distribute the image. Otherwise, Docker recommends -adopting the [Expat license](http://directory.fsf.org/wiki/License:Expat) -(a.k.a., the MIT or X11 license). - -## Examples - -Below are sample short and long description files for an imaginary image -containing Ruby on Rails.
- -### Short description - -`README-short.txt` - -`Ruby on Rails is an open-source application framework written in Ruby. It emphasizes best practices such as convention over configuration, active record pattern, and the model-view-controller pattern.` - -### Long description - -`README.md` - -```markdown -# What is Ruby on Rails - -Ruby on Rails, often simply referred to as Rails, is an open source web application framework which runs via the Ruby programming language. It is a full-stack framework: it allows creating pages and applications that gather information from the web server, talk to or query the database, and render templates out of the box. As a result, Rails features a routing system that is independent of the web server. - -> [wikipedia.org/wiki/Ruby_on_Rails](https://en.wikipedia.org/wiki/Ruby_on_Rails) - -# How to use this image - -## Create a `Dockerfile` in your rails app project - - FROM rails:onbuild - -Put this file in the root of your app, next to the `Gemfile`. - -This image includes multiple `ONBUILD` triggers so that should be all that you need for most applications. The build will `ADD . /usr/src/app`, `RUN bundle install`, `EXPOSE 3000`, and set the default command to `rails server`. - -Then build and run the Docker image. - - docker build -t my-rails-app . - docker run --name some-rails-app -d my-rails-app - -Test it by visiting `http://container-ip:3000` in a browser. On the other hand, if you need access outside the host on port 8080: - - docker run --name some-rails-app -p 8080:3000 -d my-rails-app - -Then go to `http://localhost:8080` or `http://host-ip:8080` in a browser. 
-``` - -For more examples, take a look at these repos: - -* [Go](https://github.com/docker-library/golang) -* [PostgreSQL](https://github.com/docker-library/postgres) -* [Buildpack-deps](https://github.com/docker-library/buildpack-deps) -* ["Hello World" minimal container](https://github.com/docker-library/hello-world) -* [Node](https://github.com/docker-library/node) - -## Submit your repo - -Once you've checked off everything in these guidelines, and are confident your -image is ready for primetime, please contact us at -[partners@docker.com](mailto:partners@docker.com) to have your project -considered for the Official Repos program. diff --git a/docker-hub/repos.md~ b/docker-hub/repos.md~ deleted file mode 100644 index 576583584f..0000000000 --- a/docker-hub/repos.md~ +++ /dev/null @@ -1,197 +0,0 @@ -page_title: Repositories and Images on Docker Hub -page_description: Repositories and Images on Docker Hub -page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, webhooks, docs, documentation - -# Repositories and Images on Docker Hub - -![repositories](/docker-hub/hub-images/repos.png) - -## Searching for repositories and images - -You can `search` for all the publicly available repositories and images using -Docker. - - $ sudo docker search ubuntu - -This will show you a list of the currently available repositories on the -Docker Hub which match the provided keyword. - -If a repository is private it won't be listed on the repository search -results. To see repository statuses, you can look at your [profile -page](https://hub.docker.com) on [Docker Hub](https://hub.docker.com). - -## Repositories - -Your Docker Hub repositories have a number of useful features. - -### Stars - -Your repositories can be starred and you can star repositories in -return. Stars are a way to show that you like a repository. They are -also an easy way of bookmarking your favorites. 
- -### Comments - -You can interact with other members of the Docker community and maintainers by -leaving comments on repositories. If you find any comments that are not -appropriate, you can flag them for review. - -### Collaborators and their role - -A collaborator is someone you want to give access to a private -repository. Once designated, they can `push` and `pull` to your -repositories. They will not be allowed to perform any administrative -tasks such as deleting the repository or changing its status from -private to public. - -> **Note:** -> A collaborator cannot add other collaborators. Only the owner of -> the repository has administrative access. - -You can also collaborate on Docker Hub with organizations and groups. -You can read more about that [here](accounts/). - -## Official Repositories - -The Docker Hub contains a number of [official -repositories](http://registry.hub.docker.com/official). These are -certified repositories from vendors and contributors to Docker. They -contain Docker images from vendors like Canonical, Oracle, and Red Hat -that you can use to build applications and services. - -If you use Official Repositories you know you're using a supported, -optimized and up-to-date image to power your applications. - -> **Note:** -> If you would like to contribute an official repository for your -> organization, product or team you can see more information -> [here](https://github.com/docker/stackbrew). - -## Private Repositories - -Private repositories allow you to have repositories that contain images -that you want to keep private, either to your own account or within an -organization or group. - -To work with a private repository on [Docker -Hub](https://hub.docker.com), you will need to add one via the [Add -Repository](https://registry.hub.docker.com/account/repositories/add/) -link. You get one private repository for free with your Docker Hub -account. 
If you need more accounts you can upgrade your [Docker -Hub](https://registry.hub.docker.com/plans/) plan. - -Once the private repository is created, you can `push` and `pull` images -to and from it using Docker. - -> *Note:* You need to be signed in and have access to work with a -> private repository. - -Private repositories are just like public ones. However, it isn't -possible to browse them or search their content on the public registry. -They do not get cached the same way as a public repository either. - -It is possible to give access to a private repository to those whom you -designate (i.e., collaborators) from its Settings page. From there, you -can also switch repository status (*public* to *private*, or -vice-versa). You will need to have an available private repository slot -open before you can do such a switch. If you don't have any available, -you can always upgrade your [Docker -Hub](https://registry.hub.docker.com/plans/) plan. - -## Webhooks - -You can configure webhooks for your repositories on the Repository -Settings page. A webhook is called only after a successful `push` is -made. The webhook calls are HTTP POST requests with a JSON payload -similar to the example shown below. - -*Example webhook JSON payload:* - -``` -{ - "callback_url": "https://registry.hub.docker.com/u/svendowideit/busybox/hook/2141bc0cdec4hebec411i4c1g40242eg110020/", - "push_data": { - "images": [ - "27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3", - "51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c", - ... 
- ], - "pushed_at": 1.417566822e+09, - "pusher": "svendowideit" - }, - "repository": { - "comment_count": 0, - "date_created": 1.417566665e+09, - "description": "", - "full_description": "webhook triggered from a 'docker push'", - "is_official": false, - "is_private": false, - "is_trusted": false, - "name": "busybox", - "namespace": "svendowideit", - "owner": "svendowideit", - "repo_name": "svendowideit/busybox", - "repo_url": "https://registry.hub.docker.com/u/svendowideit/busybox/", - "star_count": 0, - "status": "Active" -} -``` - -Webhooks allow you to notify people, services and other applications of -new updates to your images and repositories. To get started adding webhooks, -go to the desired repository in the Hub, and click "Webhooks" under the "Settings" -box. - -> **Note:** For testing, you can try an HTTP request tool like -> [requestb.in](http://requestb.in/). - -> **Note**: The Docker Hub servers are currently in the IP range -> `162.242.195.64 - 162.242.195.127`, so you can restrict your webhooks to -> accept webhook requests from that set of IP addresses. - -### Webhook chains - -Webhook chains allow you to chain calls to multiple services. For example, -you can use this to trigger a deployment of your container only after -it has been successfully tested, then update a separate Changelog once the -deployment is complete. -After clicking the "Add webhook" button, simply add as many URLs as necessary -in your chain. - -The first webhook in a chain will be called after a successful push. Subsequent -URLs will be contacted after the callback has been validated. - -#### Validating a callback - -In order to validate a callback in a webhook chain, you need to - -1. Retrieve the `callback_url` value in the request's JSON payload. -1. Send a POST request to this URL containing a valid JSON body. - -> **Note**: A chain request will only be considered complete once the last -> callback has been validated. 
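The two steps above can be sketched as a short script. The callback URL and the field values here are illustrative placeholders, not real Hub endpoints or results:

```shell
# Step 1 (illustrative): the callback_url retrieved from the push payload.
callback_url="https://registry.hub.docker.com/u/example/repo/hook/token/"

# Step 2: build a valid JSON body; state is required, the rest is optional.
body='{"state": "success", "description": "387 tests PASSED", "context": "Acme CI", "target_url": "http://ci.acme.com/results/example"}'
echo "$body"

# POSTing the body to the callback_url validates this link of the chain:
# curl -s -X POST -H 'Content-Type: application/json' -d "$body" "$callback_url"
```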
- -To help you debug or simply view the results of your webhook(s), -view the "History" of the webhook available on its settings page. - -#### Callback JSON data - -The following parameters are recognized in callback data: - -* `state` (required): Accepted values are `success`, `failure` and `error`. - If the state isn't `success`, the webhook chain will be interrupted. -* `description`: A string containing miscellaneous information that will be - available on the Docker Hub. Maximum 255 characters. -* `context`: A string containing the context of the operation. Can be retrieved - from the Docker Hub. Maximum 100 characters. -* `target_url`: The URL where the results of the operation can be found. Can be - retrieved on the Docker Hub. - -*Example callback payload:* - - { - "state": "success", - "description": "387 tests PASSED", - "context": "Continuous integration by Acme CI", - "target_url": "http://ci.acme.com/results/afd339c1c3d27" - } diff --git a/examples/apt-cacher-ng.md~ b/examples/apt-cacher-ng.md~ deleted file mode 100644 index cd92cb59a1..0000000000 --- a/examples/apt-cacher-ng.md~ +++ /dev/null @@ -1,107 +0,0 @@ -page_title: Dockerizing an apt-cacher-ng service -page_description: Installing and running an apt-cacher-ng service -page_keywords: docker, example, package installation, networking, debian, ubuntu - -# Dockerizing an Apt-Cacher-ng Service - -> **Note**: -> - **If you don't like sudo** then see [*Giving non-root -> access*](/installation/binaries/#giving-non-root-access). -> - **If you're using OS X or docker via TCP** then you shouldn't use -> sudo. - -When you have multiple Docker servers, or build unrelated Docker -containers which can't make use of the Docker build cache, it can be -useful to have a caching proxy for your packages. This container makes -the second download of any package almost instant. - -Use the following Dockerfile: - - # - # Build: docker build -t apt-cacher . 
# Run: docker run -d -p 3142:3142 --name apt-cacher-run apt-cacher - # - # and then you can run containers with: - # docker run -t -i --rm -e http_proxy=http://dockerhost:3142/ debian bash - # - FROM ubuntu - MAINTAINER SvenDowideit@docker.com - - VOLUME ["/var/cache/apt-cacher-ng"] - RUN apt-get update && apt-get install -y apt-cacher-ng - - EXPOSE 3142 - CMD chmod 777 /var/cache/apt-cacher-ng && /etc/init.d/apt-cacher-ng start && tail -f /var/log/apt-cacher-ng/* - -Build the image using: - - $ sudo docker build -t eg_apt_cacher_ng . - -Then run it, mapping the exposed port to one on the host: - - $ sudo docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng - -To see the logfiles that are `tailed` in the default command, you can -use: - - $ sudo docker logs -f test_apt_cacher_ng - -To get your Debian-based containers to use the proxy, you can do one of -three things: - -1. Add an apt proxy setting: - `echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/apt.conf.d/01proxy` -2. Set an environment variable: - `http_proxy=http://dockerhost:3142/` -3. Change your `sources.list` entries to start with - `http://dockerhost:3142/` - -**Option 1** injects the settings safely into your apt configuration in -a local version of a common base: - - FROM ubuntu - RUN echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/apt.conf.d/01proxy - RUN apt-get update && apt-get install -y vim git - - # docker build -t my_ubuntu . - -**Option 2** is good for testing, but will break other HTTP clients -which obey `http_proxy`, such as `curl`, `wget` and others: - - $ sudo docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash - -**Option 3** is the least portable, but there will be times when you -might need to do it and you can do it from your `Dockerfile` -too.
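A sketch of what the option 3 rewrite can look like, shown on a sample file rather than a real `/etc/apt/sources.list` (the mirror name and the `dockerhost` proxy address are placeholders):

```shell
# Rewrite mirror URLs so apt requests go through the caching proxy.
cat > sources.list.sample <<'EOF'
deb http://archive.ubuntu.com/ubuntu trusty main
EOF

# Prefix the mirror with the proxy host; apt-cacher-ng forwards the rest.
sed -i.bak 's|http://|http://dockerhost:3142/|' sources.list.sample
rewritten=$(cat sources.list.sample)
echo "$rewritten"

rm -f sources.list.sample sources.list.sample.bak
```

Inside a container, the same `sed` line can be run against `/etc/apt/sources.list` from a `RUN` instruction in your `Dockerfile`.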
- -Apt-cacher-ng has some tools that allow you to manage the repository, -and they can be used by leveraging the `VOLUME` -instruction, and the image we built to run the service: - - $ sudo docker run --rm -t -i --volumes-from test_apt_cacher_ng eg_apt_cacher_ng bash - - $$ /usr/lib/apt-cacher-ng/distkill.pl - Scanning /var/cache/apt-cacher-ng, please wait... - Found distributions: - bla, taggedcount: 0 - 1. precise-security (36 index files) - 2. wheezy (25 index files) - 3. precise-updates (36 index files) - 4. precise (36 index files) - 5. wheezy-updates (18 index files) - - Found architectures: - 6. amd64 (36 index files) - 7. i386 (24 index files) - - WARNING: The removal action may wipe out whole directories containing - index files. Select d to see detailed list. - - (Number nn: tag distribution or architecture nn; 0: exit; d: show details; r: remove tagged; q: quit): q - -Finally, clean up after your test by stopping and removing the -container, and then removing the image. - - $ sudo docker stop test_apt_cacher_ng - $ sudo docker rm test_apt_cacher_ng - $ sudo docker rmi eg_apt_cacher_ng diff --git a/examples/couchdb_data_volumes.md~ b/examples/couchdb_data_volumes.md~ deleted file mode 100644 index 8cd2408e44..0000000000 --- a/examples/couchdb_data_volumes.md~ +++ /dev/null @@ -1,43 +0,0 @@ -page_title: Dockerizing a CouchDB Service -page_description: Sharing data between 2 couchdb databases -page_keywords: docker, example, package installation, networking, couchdb, data volumes - -# Dockerizing a CouchDB Service - -> **Note**: -> - **If you don't like sudo** then see [*Giving non-root -> access*](/installation/binaries/#giving-non-root-access) - -Here's an example of using data volumes to share the same data between -two CouchDB containers. This could be used for hot upgrades, testing -different versions of CouchDB on the same data, etc. - -## Create first database - -Note that we're marking `/var/lib/couchdb` as a data volume. 
- - $ COUCH1=$(sudo docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03) - -## Add data to the first database - -We're assuming your Docker host is reachable at `localhost`. If not, -replace `localhost` with the public IP of your Docker host. - - $ HOST=localhost - $ URL="http://$HOST:$(sudo docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/" - $ echo "Navigate to $URL in your browser, and use the couch interface to add data" - -## Create second database - -This time, we're requesting shared access to `$COUCH1`'s volumes. - - $ COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03) - -## Browse data on the second database - - $ HOST=localhost - $ URL="http://$HOST:$(sudo docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/" - $ echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!' - -Congratulations, you are now running two Couchdb containers, completely -isolated from each other *except* for their data. diff --git a/examples/mongodb.md~ b/examples/mongodb.md~ deleted file mode 100644 index 28f7824594..0000000000 --- a/examples/mongodb.md~ +++ /dev/null @@ -1,152 +0,0 @@ -page_title: Dockerizing MongoDB -page_description: Creating a Docker image with MongoDB pre-installed using a Dockerfile and sharing the image on Docker Hub -page_keywords: docker, dockerize, dockerizing, article, example, docker.io, platform, package, installation, networking, mongodb, containers, images, image, sharing, dockerfile, build, auto-building, virtualization, framework - -# Dockerizing MongoDB - -## Introduction - -In this example, we are going to learn how to build a Docker image with -MongoDB pre-installed. We'll also see how to `push` that image to the -[Docker Hub registry](https://hub.docker.com) and share it with others! 
- -Using Docker and containers for deploying [MongoDB](https://www.mongodb.org/) -instances will bring several benefits, such as: - - - Easy to maintain, highly configurable MongoDB instances; - - Ready to run and start working within milliseconds; - - Based on globally accessible and shareable images. - -> **Note:** -> -> If you do **_not_** like `sudo`, you might want to check out: -> [*Giving non-root access*](/installation/binaries/#giving-non-root-access). - -## Creating a Dockerfile for MongoDB - -Let's create our `Dockerfile` and start building it: - - $ nano Dockerfile - -Although optional, it is handy to have comments at the beginning of a -`Dockerfile` explaining its purpose: - - # Dockerizing MongoDB: Dockerfile for building MongoDB images - # Based on ubuntu:latest, installs MongoDB following the instructions from: - # http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ - -> **Tip:** `Dockerfile`s are flexible. However, they need to follow a certain -> format. The first item to be defined is the name of an image, which becomes -> the *parent* of your *Dockerized MongoDB* image. - -We will build our image using the latest version of Ubuntu from the -[Docker Hub Ubuntu](https://registry.hub.docker.com/_/ubuntu/) repository. - - # Format: FROM repository[:version] - FROM ubuntu:latest - -Continuing, we will declare the `MAINTAINER` of the `Dockerfile`: - - # Format: MAINTAINER Name - MAINTAINER M.Y. Name - -> **Note:** Although Ubuntu systems have MongoDB packages, they are likely to -> be outdated. Therefore in this example, we will use the official MongoDB -> packages. - -We will begin with importing the MongoDB public GPG key. We will also create -a MongoDB repository file for the package manager. 
- - # Installation: - # Import MongoDB public GPG key AND create a MongoDB list file - RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 - RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list - -After this initial preparation we can update our packages and install MongoDB. - - # Update apt-get sources AND install MongoDB - RUN apt-get update && apt-get install -y mongodb-org - -> **Tip:** You can install a specific version of MongoDB by using a list -> of required packages with versions, e.g.: -> -> RUN apt-get update && apt-get install -y mongodb-org=2.6.1 mongodb-org-server=2.6.1 mongodb-org-shell=2.6.1 mongodb-org-mongos=2.6.1 mongodb-org-tools=2.6.1 - -MongoDB requires a data directory. Let's create it as the final step of our -installation instructions. - - # Create the MongoDB data directory - RUN mkdir -p /data/db - -Lastly we set the `ENTRYPOINT` which will tell Docker to run `mongod` inside -the containers launched from our MongoDB image. And for ports, we will use -the `EXPOSE` instruction. - - # Expose port 27017 from the container to the host - EXPOSE 27017 - - # Set usr/bin/mongod as the dockerized entry-point application - ENTRYPOINT usr/bin/mongod - -Now save the file and let's build our image. - -> **Note:** -> -> The full version of this `Dockerfile` can be found [here](/examples/mongodb/Dockerfile). - -## Building the MongoDB Docker image - -With our `Dockerfile`, we can now build the MongoDB image using Docker. Unless -experimenting, it is always a good practice to tag Docker images by passing the -`--tag` option to `docker build` command. - - # Format: sudo docker build --tag/-t / . - # Example: - $ sudo docker build --tag my/repo . - -Once this command is issued, Docker will go through the `Dockerfile` and build -the image. The final image will be tagged `my/repo`. 
- -## Pushing the MongoDB image to Docker Hub - -All Docker image repositories can be hosted and shared on -[Docker Hub](https://hub.docker.com) with the `docker push` command. For this, -you need to be logged-in. - - # Log-in - $ sudo docker login - Username: - .. - - # Push the image - # Format: sudo docker push / - $ sudo docker push my/repo - The push refers to a repository [my/repo] (len: 1) - Sending image list - Pushing repository my/repo (1 tags) - .. - -## Using the MongoDB image - -Using the MongoDB image we created, we can run one or more MongoDB instances -as daemon process(es). - - # Basic way - # Usage: sudo docker run --name -d / - $ sudo docker run --name mongo_instance_001 -d my/repo - - # Dockerized MongoDB, lean and mean! - # Usage: sudo docker run --name -d / --noprealloc --smallfiles - $ sudo docker run --name mongo_instance_001 -d my/repo --noprealloc --smallfiles - - # Checking out the logs of a MongoDB container - # Usage: sudo docker logs - $ sudo docker logs mongo_instance_001 - - # Playing with MongoDB - # Usage: mongo --port - $ mongo --port 12345 - - - [Linking containers](/userguide/dockerlinks) - - [Cross-host linking containers](/articles/ambassador_pattern_linking/) - - [Creating an Automated Build](/docker-io/builds/#automated-builds) diff --git a/examples/nodejs_web_app.md~ b/examples/nodejs_web_app.md~ deleted file mode 100644 index 56f7687cd2..0000000000 --- a/examples/nodejs_web_app.md~ +++ /dev/null @@ -1,191 +0,0 @@ -page_title: Dockerizing a Node.js Web App -page_description: Installing and running a Node.js app with Docker -page_keywords: docker, example, package installation, node, centos - -# Dockerizing a Node.js Web App - -> **Note**: -> - **If you don't like sudo** then see [*Giving non-root -> access*](/installation/binaries/#giving-non-root-access) - -The goal of this example is to show you how you can build your own -Docker images from a parent image using a `Dockerfile` -. 
We will do that by making a simple Node.js hello world web -application running on CentOS. You can get the full source code at -[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/). - -## Create Node.js app - -First, create a directory `src` where all the files -would live. Then create a `package.json` file that -describes your app and its dependencies: - - { - "name": "docker-centos-hello", - "private": true, - "version": "0.0.1", - "description": "Node.js Hello world app on CentOS using docker", - "author": "Daniel Gasienica ", - "dependencies": { - "express": "3.2.4" - } - } - -Then, create an `index.js` file that defines a web -app using the [Express.js](http://expressjs.com/) framework: - - var express = require('express'); - - // Constants - var PORT = 8080; - - // App - var app = express(); - app.get('/', function (req, res) { - res.send('Hello world\n'); - }); - - app.listen(PORT); - console.log('Running on http://localhost:' + PORT); - -In the next steps, we'll look at how you can run this app inside a -CentOS container using Docker. First, you'll need to build a Docker -image of your app. - -## Creating a Dockerfile - -Create an empty file called `Dockerfile`: - - touch Dockerfile - -Open the `Dockerfile` in your favorite text editor - -Define the parent image you want to use to build your own image on -top of. Here, we'll use -[CentOS](https://registry.hub.docker.com/_/centos/) (tag: `centos6`) -available on the [Docker Hub](https://hub.docker.com/): - - FROM centos:centos6 - -Since we're building a Node.js app, you'll have to install Node.js as -well as npm on your CentOS image. Node.js is required to run your app -and npm to install your app's dependencies defined in -`package.json`. 
To install the right package for -CentOS, we'll use the instructions from the [Node.js wiki]( -https://github.com/joyent/node/wiki/Installing-Node.js- -via-package-manager#rhelcentosscientific-linux-6): - - # Enable EPEL for Node.js - RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm - # Install Node.js and npm - RUN yum install -y npm - -To bundle your app's source code inside the Docker image, use the `COPY` -instruction: - - # Bundle app source - COPY . /src - -Install your app dependencies using the `npm` binary: - - # Install app dependencies - RUN cd /src; npm install - -Your app binds to port `8080` so you'll use the` EXPOSE` instruction to have -it mapped by the `docker` daemon: - - EXPOSE 8080 - -Last but not least, define the command to run your app using `CMD` which -defines your runtime, i.e. `node`, and the path to our app, i.e. `src/index.js` -(see the step where we added the source to the container): - - CMD ["node", "/src/index.js"] - -Your `Dockerfile` should now look like this: - - FROM centos:centos6 - - # Enable EPEL for Node.js - RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm - # Install Node.js and npm - RUN yum install -y npm - - # Bundle app source - COPY . /src - # Install app dependencies - RUN cd /src; npm install - - EXPOSE 8080 - CMD ["node", "/src/index.js"] - -## Building your image - -Go to the directory that has your `Dockerfile` and run the following command -to build a Docker image. The `-t` flag lets you tag your image so it's easier -to find later using the `docker images` command: - - $ sudo docker build -t /centos-node-hello . 
- -Your image will now be listed by Docker: - - $ sudo docker images - - # Example - REPOSITORY TAG ID CREATED - centos centos6 539c0211cd76 8 weeks ago - /centos-node-hello latest d64d3505b0d2 2 hours ago - -## Run the image - -Running your image with `-d` runs the container in detached mode, leaving the -container running in the background. The `-p` flag redirects a public port to -a private port in the container. Run the image you previously built: - - $ sudo docker run -p 49160:8080 -d /centos-node-hello - -Print the output of your app: - - # Get container ID - $ sudo docker ps - - # Print app output - $ sudo docker logs - - # Example - Running on http://localhost:8080 - -## Test - -To test your app, get the port of your app that Docker mapped: - - $ sudo docker ps - - # Example - ID IMAGE COMMAND ... PORTS - ecce33b30ebf /centos-node-hello:latest node /src/index.js 49160->8080 - -In the example above, Docker mapped the `8080` port of the container to `49160`. - -Now you can call your app using `curl` (install if needed via: -`sudo apt-get install curl`): - - $ curl -i localhost:49160 - - HTTP/1.1 200 OK - X-Powered-By: Express - Content-Type: text/html; charset=utf-8 - Content-Length: 12 - Date: Sun, 02 Jun 2013 03:53:22 GMT - Connection: keep-alive - - Hello world - -If you use Boot2docker on OS X, the port is actually mapped to the Docker host VM, -and you should use the following command: - - $ curl $(boot2docker ip):49160 - -We hope this tutorial helped you get up and running with Node.js and -CentOS on Docker. You can get the full source code at -[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/). 
diff --git a/examples/postgresql_service.md~ b/examples/postgresql_service.md~ deleted file mode 100644 index 21044d369a..0000000000 --- a/examples/postgresql_service.md~ +++ /dev/null @@ -1,147 +0,0 @@ -page_title: Dockerizing PostgreSQL -page_description: Running and installing a PostgreSQL service -page_keywords: docker, example, package installation, postgresql - -# Dockerizing PostgreSQL - -> **Note**: -> - **If you don't like sudo** then see [*Giving non-root -> access*](/installation/binaries/#giving-non-root-access) - -## Installing PostgreSQL on Docker - -Assuming there is no Docker image that suits your needs on the [Docker -Hub](http://hub.docker.com), you can create one yourself. - -Start by creating a new `Dockerfile`: - -> **Note**: -> This PostgreSQL setup is for development-only purposes. Refer to the -> PostgreSQL documentation to fine-tune these settings so that it is -> suitably secure. - - # - # example Dockerfile for http://docs.docker.com/examples/postgresql_service/ - # - - FROM ubuntu - MAINTAINER SvenDowideit@docker.com - - # Add the PostgreSQL PGP key to verify their Debian packages. - # It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc - RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8 - - # Add PostgreSQL's repository. It contains the most recent stable release - # of PostgreSQL, ``9.3``. - RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list - - # Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3 - # There are some warnings (in red) that show up during the build. 
You can hide - # them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive - RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3 - - # Note: The official Debian and Ubuntu images automatically ``apt-get clean`` - # after each ``apt-get`` - - # Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed`` - USER postgres - - # Create a PostgreSQL role named ``docker`` with ``docker`` as the password and - # then create a database `docker` owned by the ``docker`` role. - # Note: here we use ``&&\`` to run commands one after the other - the ``\`` - # allows the RUN command to span multiple lines. - RUN /etc/init.d/postgresql start &&\ - psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\ - createdb -O docker docker - - # Adjust PostgreSQL configuration so that remote connections to the - # database are possible. - RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf - - # And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf`` - RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf - - # Expose the PostgreSQL port - EXPOSE 5432 - - # Add VOLUMEs to allow backup of config, logs and databases - VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"] - - # Set the default command to run when starting the container - CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"] - -Build an image from the Dockerfile assign it a name. - - $ sudo docker build -t eg_postgresql . - -And run the PostgreSQL server container (in the foreground): - - $ sudo docker run --rm -P --name pg_test eg_postgresql - -There are 2 ways to connect to the PostgreSQL server. 
We can use [*Link -Containers*](/userguide/dockerlinks), or we can access it from our host -(or the network). - -> **Note**: -> The `--rm` removes the container and its image when -> the container exits successfully. - -### Using container linking - -Containers can be linked to another container's ports directly using -`-link remote_name:local_alias` in the client's -`docker run`. This will set a number of environment -variables that can then be used to connect: - - $ sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash - - postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password - -### Connecting from your host system - -Assuming you have the postgresql-client installed, you can use the -host-mapped port to test as well. You need to use `docker ps` -to find out what local host port the container is mapped to -first: - - $ sudo docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 5e24362f27f6 eg_postgresql:latest /usr/lib/postgresql/ About an hour ago Up About an hour 0.0.0.0:49153->5432/tcp pg_test - $ psql -h localhost -p 49153 -d docker -U docker --password - -### Testing the database - -Once you have authenticated and have a `docker =#` -prompt, you can create a table and populate it. - - psql (9.3.1) - Type "help" for help. 
- - $ docker=# CREATE TABLE cities ( - docker(# name varchar(80), - docker(# location point - docker(# ); - CREATE TABLE - $ docker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)'); - INSERT 0 1 - $ docker=# select * from cities; - name | location - ---------------+----------- - San Francisco | (-194,53) - (1 row) - -### Using the container volumes - -You can use the defined volumes to inspect the PostgreSQL log files and -to backup your configuration and data: - - $ sudo docker run --rm --volumes-from pg_test -t -i busybox sh - - / # ls - bin etc lib linuxrc mnt proc run sys usr - dev home lib64 media opt root sbin tmp var - / # ls /etc/postgresql/9.3/main/ - environment pg_hba.conf postgresql.conf - pg_ctl.conf pg_ident.conf start.conf - /tmp # ls /var/log - ldconfig postgresql diff --git a/examples/running_redis_service.md~ b/examples/running_redis_service.md~ deleted file mode 100644 index 99036a0426..0000000000 --- a/examples/running_redis_service.md~ +++ /dev/null @@ -1,83 +0,0 @@ -page_title: Dockerizing a Redis service -page_description: Installing and running an redis service -page_keywords: docker, example, package installation, networking, redis - -# Dockerizing a Redis Service - -Very simple, no frills, Redis service attached to a web application -using a link. - -## Create a docker container for Redis - -Firstly, we create a `Dockerfile` for our new Redis -image. - - FROM ubuntu:14.04 - RUN apt-get update && apt-get install -y redis-server - EXPOSE 6379 - ENTRYPOINT ["/usr/bin/redis-server"] - -Next we build an image from our `Dockerfile`. -Replace `` with your own user name. - - $ sudo docker build -t /redis . - -## Run the service - -Use the image we've just created and name your container `redis`. - -Running the service with `-d` runs the container in detached mode, leaving -the container running in the background. - -Importantly, we're not exposing any ports on our container. 
Instead -we're going to use a container link to provide access to our Redis -database. - - $ sudo docker run --name redis -d /redis - -## Create your web application container - -Next we can create a container for our application. We're going to use -the `-link` flag to create a link to the `redis` container we've just -created with an alias of `db`. This will create a secure tunnel to the -`redis` container and expose the Redis instance running inside that -container to only this container. - - $ sudo docker run --link redis:db -i -t ubuntu:14.04 /bin/bash - -Once inside our freshly created container we need to install Redis to -get the `redis-cli` binary to test our connection. - - $ sudo apt-get update - $ sudo apt-get install redis-server - $ sudo service redis-server stop - -As we've used the `--link redis:db` option, Docker -has created some environment variables in our web application container. - - $ env | grep DB_ - - # Should return something similar to this with your values - DB_NAME=/violet_wolf/db - DB_PORT_6379_TCP_PORT=6379 - DB_PORT=tcp://172.17.0.33:6379 - DB_PORT_6379_TCP=tcp://172.17.0.33:6379 - DB_PORT_6379_TCP_ADDR=172.17.0.33 - DB_PORT_6379_TCP_PROTO=tcp - -We can see that we've got a small list of environment variables prefixed -with `DB`. The `DB` comes from the link alias specified when we launched -the container. Let's use the `DB_PORT_6379_TCP_ADDR` variable to connect to -our Redis container. - - $ redis-cli -h $DB_PORT_6379_TCP_ADDR - $ redis 172.17.0.33:6379> - $ redis 172.17.0.33:6379> set docker awesome - OK - $ redis 172.17.0.33:6379> get docker - "awesome" - $ redis 172.17.0.33:6379> exit - -We could easily use this or other environment variables in our web -application to make a connection to our `redis` -container. 
diff --git a/examples/running_riak_service.md~ b/examples/running_riak_service.md~ deleted file mode 100644 index 0b53234046..0000000000 --- a/examples/running_riak_service.md~ +++ /dev/null @@ -1,112 +0,0 @@ -page_title: Dockerizing a Riak service -page_description: Build a Docker image with Riak pre-installed -page_keywords: docker, example, package installation, networking, riak - -# Dockerizing a Riak Service - -The goal of this example is to show you how to build a Docker image with -Riak pre-installed. - -## Creating a Dockerfile - -Create an empty file called `Dockerfile`: - - $ touch Dockerfile - -Next, define the parent image you want to use to build your image on top -of. We'll use [Ubuntu](https://registry.hub.docker.com/_/ubuntu/) (tag: -`latest`), which is available on [Docker Hub](https://hub.docker.com): - - # Riak - # - # VERSION 0.1.0 - - # Use the Ubuntu base image provided by dotCloud - FROM ubuntu:latest - MAINTAINER Hector Castro hector@basho.com - -After that, we install and setup a few dependencies: - - - `curl` is used to download Basho's APT - repository key - - `lsb-release` helps us derive the Ubuntu release - codename - - `openssh-server` allows us to login to - containers remotely and join Riak nodes to form a cluster - - `supervisor` is used manage the OpenSSH and Riak - processes - - - - # Install and setup project dependencies - RUN apt-get update && apt-get install -y curl lsb-release supervisor openssh-server - - RUN mkdir -p /var/run/sshd - RUN mkdir -p /var/log/supervisor - - RUN locale-gen en_US en_US.UTF-8 - - COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf - - RUN echo 'root:basho' | chpasswd - -Next, we add Basho's APT repository: - - RUN curl -sSL http://apt.basho.com/gpg/basho.apt.key | apt-key add -- - RUN echo "deb http://apt.basho.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/basho.list - -After that, we install Riak and alter a few defaults: - - # Install Riak and prepare it to run - RUN apt-get 
update && apt-get install -y riak - RUN sed -i.bak 's/127.0.0.1/0.0.0.0/' /etc/riak/app.config - RUN echo "ulimit -n 4096" >> /etc/default/riak - -Then, we expose the Riak Protocol Buffers and HTTP interfaces, along -with SSH: - - # Expose Riak Protocol Buffers and HTTP interfaces, along with SSH - EXPOSE 8087 8098 22 - -Finally, run `supervisord` so that Riak and OpenSSH -are started: - - CMD ["/usr/bin/supervisord"] - -## Create a supervisord configuration file - -Create an empty file called `supervisord.conf`. Make -sure it's at the same directory level as your `Dockerfile`: - - touch supervisord.conf - -Populate it with the following program definitions: - - [supervisord] - nodaemon=true - - [program:sshd] - command=/usr/sbin/sshd -D - stdout_logfile=/var/log/supervisor/%(program_name)s.log - stderr_logfile=/var/log/supervisor/%(program_name)s.log - autorestart=true - - [program:riak] - command=bash -c ". /etc/default/riak && /usr/sbin/riak console" - pidfile=/var/log/riak/riak.pid - stdout_logfile=/var/log/supervisor/%(program_name)s.log - stderr_logfile=/var/log/supervisor/%(program_name)s.log - -## Build the Docker image for Riak - -Now you should be able to build a Docker image for Riak: - - $ sudo docker build -t "/riak" . - -## Next steps - -Riak is a distributed database. Many production deployments consist of -[at least five nodes]( -http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/). -See the [docker-riak](https://github.com/hectcastro/docker-riak) project -details on how to deploy a Riak cluster using Docker and Pipework. 
diff --git a/examples/running_ssh_service.md~ b/examples/running_ssh_service.md~ deleted file mode 100644 index 445cfe5257..0000000000 --- a/examples/running_ssh_service.md~ +++ /dev/null @@ -1,78 +0,0 @@ -page_title: Dockerizing an SSH service -page_description: Installing and running an SSHd service on Docker -page_keywords: docker, example, package installation, networking - -# Dockerizing an SSH Daemon Service - -## Build an `eg_sshd` image - -The following `Dockerfile` sets up an SSHd service in a container that you -can use to connect to and inspect other container's volumes, or to get -quick access to a test container. - - # sshd - # - # VERSION 0.0.2 - - FROM ubuntu:14.04 - MAINTAINER Sven Dowideit - - RUN apt-get update && apt-get install -y openssh-server - RUN mkdir /var/run/sshd - RUN echo 'root:screencast' | chpasswd - RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config - - # SSH login fix. Otherwise user is kicked off after login - RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd - - ENV NOTVISIBLE "in users profile" - RUN echo "export VISIBLE=now" >> /etc/profile - - EXPOSE 22 - CMD ["/usr/sbin/sshd", "-D"] - -Build the image using: - - $ sudo docker build -t eg_sshd . - -## Run a `test_sshd` container - -Then run it. You can then use `docker port` to find out what host port -the container's port 22 is mapped to: - - $ sudo docker run -d -P --name test_sshd eg_sshd - $ sudo docker port test_sshd 22 - 0.0.0.0:49154 - -And now you can ssh as `root` on the container's IP address (you can find it -with `docker inspect`) or on port `49154` of the Docker daemon's host IP address -(`ip address` or `ifconfig` can tell you that) or `localhost` if on the -Docker daemon host: - - $ ssh root@192.168.1.2 -p 49154 - # The password is ``screencast``. 
- $$ - -## Environment variables - -Using the `sshd` daemon to spawn shells makes it complicated to pass environment -variables to the user's shell via the normal Docker mechanisms, as `sshd` scrubs -the environment before it starts the shell. - -If you're setting values in the `Dockerfile` using `ENV`, you'll need to push them -to a shell initialization file like the `/etc/profile` example in the `Dockerfile` -above. - -If you need to pass`docker run -e ENV=value` values, you will need to write a -short script to do the same before you start `sshd -D` and then replace the -`CMD` with that script. - -## Clean up - -Finally, clean up after your test by stopping and removing the -container, and then removing the image. - - $ sudo docker stop test_sshd - $ sudo docker rm test_sshd - $ sudo docker rmi eg_sshd - diff --git a/img/icons/README.md~ b/img/icons/README.md~ deleted file mode 100644 index 6bd32ae6e5..0000000000 --- a/img/icons/README.md~ +++ /dev/null @@ -1,7 +0,0 @@ -### About the images - -Generally the icons are created in .svg, because it is a nicer format. Then we can easily convert them to .png as required. - -Using imagemagick; mogrify: - -mogrify -background none -format png *.svg diff --git a/include/no-remote-sudo.md~ b/include/no-remote-sudo.md~ deleted file mode 100644 index 065b0cbfd7..0000000000 --- a/include/no-remote-sudo.md~ +++ /dev/null @@ -1,3 +0,0 @@ -> **Note:** if you are using a remote Docker daemon, such as Boot2Docker, -> then _do not_ type the `sudo` before the `docker` commands shown in the -> documentation's examples. diff --git a/installation/SUSE.md~ b/installation/SUSE.md~ deleted file mode 100644 index 2a0aa91d9f..0000000000 --- a/installation/SUSE.md~ +++ /dev/null @@ -1,82 +0,0 @@ -page_title: Installation on openSUSE and SUSE Linux Enterprise -page_description: Installation instructions for Docker on openSUSE and on SUSE Linux Enterprise. 
-page_keywords: openSUSE, SUSE Linux Enterprise, SUSE, SLE, docker, documentation, installation - -# openSUSE - -Docker is available in **openSUSE 12.3 and later**. Please note that due -to its current limitations Docker is able to run only **64 bit** architecture. - -Docker is not part of the official repositories of openSUSE 12.3 and -openSUSE 13.1. Hence it is neccessary to add the [Virtualization -repository](https://build.opensuse.org/project/show/Virtualization) from -[OBS](https://build.opensuse.org/) to install the `docker` package. - -Execute one of the following commands to add the Virtualization repository: - - # openSUSE 12.3 - $ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_12.3/ Virtualization - - # openSUSE 13.1 - $ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.1/ Virtualization - -No extra repository is required for openSUSE 13.2 and later. - -# SUSE Linux Enterprise - -Docker is available in **SUSE Linux Enterprise 12 and later**. Please note that -due to its current limitations Docker is able to run only on **64 bit** -architecture. - -# Installation - -Install the Docker package. - - $ sudo zypper in docker - -Now that it's installed, let's start the Docker daemon. - - $ sudo systemctl start docker - -If we want Docker to start at boot, we should also: - - $ sudo systemctl enable docker - -The docker package creates a new group named docker. Users, other than -root user, need to be part of this group in order to interact with the -Docker daemon. You can add users with: - - $ sudo /usr/sbin/usermod -a -G docker - -To verify that everything has worked as expected: - - $ sudo docker run --rm -i -t opensuse /bin/bash - -This should download and import the `opensuse` image, and then start `bash` in -a container. To exit the container type `exit`. - -If you want your containers to be able to access the external network you must -enable the `net.ipv4.ip_forward` rule. 
-This can be done using YaST by browsing to the -`Network Devices -> Network Settings -> Routing` menu and ensuring that the -`Enable IPv4 Forwarding` box is checked. - -This option cannot be changed when networking is handled by the Network Manager. -In such cases the `/etc/sysconfig/SuSEfirewall2` file needs to be edited by -hand to ensure the `FW_ROUTE` flag is set to `yes` like so: - - FW_ROUTE="yes" - - -**Done!** - -## Custom daemon options - -If you need to add an HTTP Proxy, set a different directory or partition for the -Docker runtime files, or make other customizations, read our systemd article to -learn how to [customize your systemd Docker daemon options](/articles/systemd/). - -## What's next - -Continue with the [User Guide](/userguide/). - diff --git a/installation/amazon.md~ b/installation/amazon.md~ deleted file mode 100644 index 6a28685dc5..0000000000 --- a/installation/amazon.md~ +++ /dev/null @@ -1,49 +0,0 @@ -page_title: Installation on Amazon EC2 -page_description: Installation instructions for Docker on Amazon EC2. -page_keywords: amazon ec2, virtualization, cloud, docker, documentation, installation - -# Amazon EC2 - -There are several ways to install Docker on AWS EC2. You can use Amazon Linux, which includes the Docker packages in its Software Repository, or opt for any of the other supported Linux images, for example a [*Standard Ubuntu Installation*](#standard-ubuntu-installation). - -**You'll need an** [AWS account](http://aws.amazon.com/) **first, of -course.** - -## Amazon QuickStart with Amazon Linux AMI 2014.09.1 - -The latest Amazon Linux AMI, 2014.09.1, is Docker ready. Docker packages can be installed from Amazon's provided Software -Repository. - -1. **Choose an image:** - - Launch the [Create Instance - Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:) - menu on your AWS Console. 
- - In the Quick Start menu, select the Amazon-provided AMI for Amazon Linux 2014.09.1
- - For testing you can use the default (possibly free)
-   `t2.micro` instance (more info on
-   [pricing](http://aws.amazon.com/ec2/pricing/)).
- - Click the `Next: Configure Instance Details`
-   button at the bottom right.
-2. After a few more standard choices where defaults are probably ok,
-   your Amazon Linux instance should be running!
-3. SSH to your instance to install Docker:
-   `ssh -i <path to your keypair> ec2-user@<instance IP>`
-4. Once connected to the instance, type
-   `sudo yum install -y docker ; sudo service docker start`
-   to install and start Docker.
-
-**If this is your first AWS instance, you may need to set up your Security Group to allow SSH.** By default all incoming ports to your new instance will be blocked by the AWS Security Group, so you might just get timeouts when you try to connect.
-
-Once you've got Docker installed, you're ready to try it out – head on
-over to the [User Guide](/userguide).
-
-## Standard Ubuntu Installation
-
-If you want a more hands-on installation, you can follow the
-[*Ubuntu*](/installation/ubuntulinux) instructions for installing Docker
-on any EC2 instance running Ubuntu. Just follow Step 1 from the Amazon
-QuickStart above to pick an image (or use one of your
-own) and skip the step with the *User Data*. Then continue with the
-[*Ubuntu*](/installation/ubuntulinux) instructions.
-
-Continue with the [User Guide](/userguide/).
diff --git a/installation/archlinux.md~ b/installation/archlinux.md~
deleted file mode 100644
index 99849c7aa0..0000000000
--- a/installation/archlinux.md~
+++ /dev/null
@@ -1,61 +0,0 @@
-page_title: Installation on Arch Linux
-page_description: Installation instructions for Docker on Arch Linux.
-page_keywords: arch linux, virtualization, docker, documentation, installation - -# Arch Linux - -Installing on Arch Linux can be handled via the package in community: - - - [docker](https://www.archlinux.org/packages/community/x86_64/docker/) - -or the following AUR package: - - - [docker-git](https://aur.archlinux.org/packages/docker-git/) - -The docker package will install the latest tagged version of docker. The -docker-git package will build from the current master branch. - -## Dependencies - -Docker depends on several packages which are specified as dependencies -in the packages. The core dependencies are: - - - bridge-utils - - device-mapper - - iproute2 - - lxc - - sqlite - -## Installation - -For the normal package a simple - - pacman -S docker - -is all that is needed. - -For the AUR package execute: - - yaourt -S docker-git - -The instructions here assume **yaourt** is installed. See [Arch User -Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository#Installing_packages) -for information on building and installing packages from the AUR if you -have not done so before. - -## Starting Docker - -There is a systemd service unit created for docker. To start the docker -service: - - $ sudo systemctl start docker - -To start on system boot: - - $ sudo systemctl enable docker - -## Custom daemon options - -If you need to add an HTTP Proxy, set a different directory or partition for the -Docker runtime files, or make other customizations, read our systemd article to -learn how to [customize your systemd Docker daemon options](/articles/systemd/). diff --git a/installation/binaries.md~ b/installation/binaries.md~ deleted file mode 100644 index ef9f5cafa2..0000000000 --- a/installation/binaries.md~ +++ /dev/null @@ -1,133 +0,0 @@ -page_title: Installation from Binaries -page_description: Instructions for installing Docker as a binary. Mostly meant for hackers who want to try out Docker on a variety of environments. 
-page_keywords: binaries, installation, docker, documentation, linux
-
-# Binaries
-
-**This instruction set is meant for hackers who want to try out Docker
-on a variety of environments.**
-
-Before following these directions, you should really check if a packaged
-version of Docker is already available for your distribution. We have
-packages for many distributions, and more keep showing up all the time!
-
-## Check runtime dependencies
-
-To run properly, Docker needs the following software to be installed at
-runtime:
-
- - iptables version 1.4 or later
- - Git version 1.7 or later
- - procps (or similar provider of a "ps" executable)
- - XZ Utils 4.9 or later
- - a [properly mounted](
-   https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount)
-   cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount
-   point [is](https://github.com/docker/docker/issues/2683)
-   [not](https://github.com/docker/docker/issues/3485)
-   [sufficient](https://github.com/docker/docker/issues/4568))
-
-## Check kernel dependencies
-
-Docker in daemon mode has specific kernel requirements. For details,
-check your distribution in [*Installation*](../#installation-list).
-
-A 3.10 Linux kernel is the minimum requirement for Docker.
-Kernels older than 3.10 lack some of the features required to run Docker
-containers. These older versions are known to have bugs which cause data loss
-and frequently panic under certain conditions.
-
-The latest minor version (3.x.y) of the 3.10 series, or of a newer
-maintained kernel, is recommended. Keeping the kernel up to date with the
-latest minor version ensures critical kernel bugs get fixed.
-
-> **Warning**:
-> Installing custom kernels and kernel packages is probably not
-> supported by your Linux distribution's vendor. Please make sure to
-> ask your vendor about Docker support first before attempting to
-> install custom kernels on your distribution.
-
-> **Warning**:
-> Installing a newer kernel might not be enough for some distributions
-> which provide packages which are too old or incompatible with
-> newer kernels.
-
-Note that Docker also has a client mode, which can run on virtually any
-Linux kernel (it even builds on OS X!).
-
-## Enable AppArmor and SELinux when possible
-
-Please use AppArmor or SELinux if your Linux distribution supports
-either of the two. This helps improve security and blocks certain
-types of exploits. Your distribution's documentation should provide
-detailed steps on how to enable the recommended security mechanism.
-
-Some Linux distributions enable AppArmor or SELinux by default while
-running a kernel which doesn't meet the minimum requirement (3.10
-or newer). Updating the kernel to 3.10 or newer on such a system
-might not be enough to start Docker and run containers.
-Incompatibilities between the version of the AppArmor/SELinux user
-space utilities provided by the system and the kernel could prevent
-Docker from running or starting containers, or cause containers to
-exhibit unexpected behaviour.
-
-> **Warning**:
-> If either of the security mechanisms is enabled, it should not be
-> disabled to make Docker or its containers run. Doing so reduces
-> security in that environment, forfeits support from the distribution's
-> vendor for the system, and might break regulations and security
-> policies in heavily regulated environments.
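The 3.10 minimum from the kernel-dependency check above can be verified on the command line before downloading the binary; a small sketch (assumes GNU `sort`, whose `-V` flag does version ordering):

```shell
#!/bin/sh
# Succeed when version $1 is greater than or equal to version $2.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required="3.10"
current="$(uname -r | cut -d- -f1)"   # e.g. "3.16.7" from "3.16.7-generic"

if version_ge "$current" "$required"; then
    echo "kernel $current meets the $required minimum"
else
    echo "kernel $current is older than $required"
fi
```

Note that naive string comparison would get this wrong (`"3.2" > "3.10"` lexically); `sort -V` compares numeric components.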
-
-## Get the docker binary:
-
-    $ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
-    $ chmod +x docker
-
-> **Note**:
-> If you have trouble downloading the binary, you can also get the smaller
-> compressed release file:
-> [https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz](
-> https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz)
-
-## Run the docker daemon
-
-    # start docker in daemon mode from the directory where you unpacked it
-    $ sudo ./docker -d &
-
-## Giving non-root access
-
-The `docker` daemon always runs as the root user, and binds to a Unix
-socket instead of a TCP port. By default that
-Unix socket is owned by the user *root*, and so, by default, you can
-access it with `sudo`.
-
-If you (or your Docker installer) create a Unix group called *docker*
-and add users to it, then the `docker` daemon will make the
-Unix socket readable and writable by the *docker* group when the daemon
-starts. The `docker` daemon must always run as the root user, but if you
-run the `docker` client as a user in the *docker* group then you don't
-need to add `sudo` to all the client commands.
-
-> **Warning**:
-> The *docker* group (or the group specified with `-G`) is root-equivalent;
-> see [*Docker Daemon Attack Surface*](
-> /articles/security/#docker-daemon-attack-surface) for details.
-
-## Upgrades
-
-To upgrade your manual installation of Docker, first kill the docker
-daemon:
-
-    $ killall docker
-
-Then follow the regular installation steps.
-
-## Run your first container!
-
-    # check your docker version
-    $ sudo ./docker version
-
-    # run a container and open an interactive shell in the container
-    $ sudo ./docker run -i -t ubuntu /bin/bash
-
-Continue with the [User Guide](/userguide/).
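Whether a user already belongs to the *docker* group can be confirmed by parsing `getent group docker` output; a sketch of that parsing (the group line below is a made-up example; on a real host substitute `line=$(getent group docker)`):

```shell
#!/bin/sh
# Succeed when user $1 is listed in the member field of a group(5)-style line $2.
in_group() {
    members=$(printf '%s' "$2" | cut -d: -f4)
    printf ',%s,' "$members" | grep -q ",$1,"
}

line="docker:x:999:alice,bob"   # made-up example; use: line=$(getent group docker)

in_group alice "$line" && echo "alice is in the docker group"
in_group carol "$line" || echo "carol is not in the docker group"
```

Remember that group changes only take effect in new login sessions, so a freshly added user must log out and back in before the check (or the `docker` client) succeeds.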
diff --git a/installation/centos.md~ b/installation/centos.md~
deleted file mode 100644
index 06dc8bfee8..0000000000
--- a/installation/centos.md~
+++ /dev/null
@@ -1,136 +0,0 @@
-page_title: Installation on CentOS
-page_description: Instructions for installing Docker on CentOS
-page_keywords: Docker, Docker documentation, requirements, linux, centos, epel, docker.io, docker-io
-
-# CentOS
-
-Docker is supported on the following versions of CentOS:
-
-- [*CentOS 7 (64-bit)*](#installing-docker---centos-7)
-- [*CentOS 6.5 (64-bit)*](#installing-docker---centos-6.5) or later
-
-These instructions are likely to work for other binary-compatible EL6/EL7
-distributions such as Scientific Linux, but they haven't been tested.
-
-Please note that due to the current Docker limitations, Docker can
-run only on the **64-bit** architecture.
-
-## Kernel support
-
-Currently the CentOS project will only support Docker when running on kernels
-shipped by the distribution. There are kernel changes which will cause issues
-if one decides to step outside that box and run non-distribution kernel packages.
-
-To run Docker on [CentOS-6.5](http://www.centos.org) or later, you will need
-kernel version 2.6.32-431 or higher, as this has specific kernel fixes to allow
-Docker to run.
-
-## Installing Docker - CentOS-7
-
-Docker is included by default in the CentOS-Extras repository. To install it,
-run the following command:
-
-    $ sudo yum install docker
-
-Please continue with [Starting the Docker daemon](#starting-the-docker-daemon).
-
-### FirewallD
-
-CentOS-7 introduced firewalld, which is a wrapper around iptables and can
-conflict with Docker.
-
-When `firewalld` is started or restarted it will remove the `DOCKER` chain
-from iptables, preventing Docker from working properly.
-
-When using Systemd, `firewalld` is started before Docker, but if you
-start or restart `firewalld` after Docker, you will have to restart the Docker daemon.
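The boot-time ordering can be made explicit with a systemd drop-in; a sketch (the override path assumes systemd's standard drop-in mechanism, and a runtime restart of `firewalld` still requires restarting the Docker daemon afterwards):

```ini
# /etc/systemd/system/docker.service.d/after-firewalld.conf
[Unit]
# Order docker.service after firewalld.service at boot, so firewalld does
# not remove the DOCKER iptables chain after Docker has created it.
After=firewalld.service
```

Run `sudo systemctl daemon-reload` for the drop-in to take effect.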
-
-## Installing Docker - CentOS-6.5
-
-For CentOS-6.5, the Docker package is part of the [Extra Packages
-for Enterprise Linux (EPEL)](https://fedoraproject.org/wiki/EPEL) repository,
-a community effort to create and maintain additional packages for the RHEL distribution.
-
-First, you need to ensure you have the EPEL repository enabled. Please
-follow the [EPEL installation instructions](
-https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F).
-
-For CentOS-6, there is a package name conflict with a system tray application
-and its executable, so the Docker RPM package was called `docker-io`.
-
-To proceed with `docker-io` installation on CentOS-6, you may need to remove the
-`docker` package first.
-
-    $ sudo yum -y remove docker
-
-Next, let's install the `docker-io` package, which will install Docker on our host.
-
-    $ sudo yum install docker-io
-
-Please continue with [Starting the Docker daemon](#starting-the-docker-daemon).
-
-## Manual installation of the latest Docker release
-
-While using a package is the recommended way of installing Docker,
-the above package might not be the current release version. If you need the latest
-version, [you can install the binary directly](
-https://docs.docker.com/installation/binaries/).
-
-When installing the binary without a package, you may want
-to integrate Docker with Systemd. For this, install the two unit files
-(service and socket) from [the GitHub
-repository](https://github.com/docker/docker/tree/master/contrib/init/systemd)
-to `/etc/systemd/system`.
-
-Please continue with [Starting the Docker daemon](#starting-the-docker-daemon).
-
-## Starting the Docker daemon
-
-Once Docker is installed, you will need to start the docker daemon.
-
-    $ sudo service docker start
-
-If we want Docker to start at boot, we should also:
-
-    $ sudo chkconfig docker on
-
-Now let's verify that Docker is working. First we'll need to get the latest
-`centos` image.
-
-    $ sudo docker pull centos
-
-Next we'll make sure that we can see the image by running:
-
-    $ sudo docker images centos
-
-This should generate some output similar to:
-
-    $ sudo docker images centos
-    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
-    centos              latest              0b443ba03958        2 hours ago         297.6 MB
-
-Run a simple bash shell to test the image:
-
-    $ sudo docker run -i -t centos /bin/bash
-
-If everything is working properly, you'll get a simple bash prompt. Type
-`exit` to continue.
-
-## Custom daemon options
-
-If you need to add an HTTP Proxy, set a different directory or partition for the
-Docker runtime files, or make other customizations, read our Systemd article to
-learn how to [customize your Systemd Docker daemon options](/articles/systemd/).
-
-## Dockerfiles
-
-The CentOS Project provides a number of sample Dockerfiles which you may use
-either as templates or to familiarize yourself with Docker. These templates
-are available on GitHub at [https://github.com/CentOS/CentOS-Dockerfiles](
-https://github.com/CentOS/CentOS-Dockerfiles).
-
-**Done!** You can either continue with the [Docker User
-Guide](/userguide/) or explore and build on the images yourself.
-
-## Issues?
-
-If you have any issues, please report them directly in the
-[CentOS bug tracker](http://bugs.centos.org).
diff --git a/installation/cruxlinux.md~ b/installation/cruxlinux.md~
deleted file mode 100644
index ead4c273ca..0000000000
--- a/installation/cruxlinux.md~
+++ /dev/null
@@ -1,72 +0,0 @@
-page_title: Installation on CRUX Linux
-page_description: Docker installation on CRUX Linux.
-page_keywords: crux linux, virtualization, Docker, documentation, installation
-
-# CRUX Linux
-
-Installing on CRUX Linux is handled via the contrib ports from
-[James Mills](http://prologic.shortcircuit.net.au/), which are included in the
-official [contrib](http://crux.nu/portdb/?a=repo&q=contrib) repository:
-
-- docker
-
-The `docker` port will build and install the latest tagged version of Docker.
-
-
-## Installation
-
-Assuming you have contrib enabled, update your ports tree and install docker (*as root*):
-
-    # prt-get depinst docker
-
-
-## Kernel Requirements
-
-To have a working **CRUX+Docker** host you must ensure your kernel has
-the necessary modules enabled for the Docker daemon to function correctly.
-
-Please read the `README`:
-
-    $ prt-get readme docker
-
-The `docker` port installs the `contrib/check-config.sh` script
-provided by the Docker contributors for checking your kernel
-configuration as a suitable Docker host.
-
-To check your kernel configuration, run:
-
-    $ /usr/share/docker/check-config.sh
-
-## Starting Docker
-
-There is an rc script created for Docker. To start the Docker service (*as root*):
-
-    # /etc/rc.d/docker start
-
-To start on system boot:
-
- - Edit `/etc/rc.conf`
- - Put `docker` into the `SERVICES=(...)` array after `net`.
-
-## Images
-
-There is a CRUX image maintained by [James Mills](http://prologic.shortcircuit.net.au/)
-as part of the Docker "Official Library" of images. To use this image, simply pull it
-or use it as part of the `FROM` line in your `Dockerfile(s)`.
-
-    $ docker pull crux
-    $ docker run -i -t crux
-
-There are also user-contributed [CRUX-based images](https://registry.hub.docker.com/repos/crux/) on the Docker Hub.
-
-
-## Issues
-
-If you have any issues, please file a bug with the
-[CRUX Bug Tracker](http://crux.nu/bugs/).
-
-## Support
-
-For support, contact the [CRUX Mailing List](http://crux.nu/Main/MailingLists)
-or join CRUX's [IRC Channels](http://crux.nu/Main/IrcChannels) on the
-[FreeNode](http://freenode.net/) IRC network.
diff --git a/installation/debian.md~ b/installation/debian.md~
deleted file mode 100644
index 74acd1d42b..0000000000
--- a/installation/debian.md~
+++ /dev/null
@@ -1,103 +0,0 @@
-page_title: Installation on Debian
-page_description: Instructions for installing Docker on Debian.
-page_keywords: Docker, Docker documentation, installation, debian
-
-# Debian
-
-Docker is supported on the following versions of Debian:
-
- - [*Debian 8.0 Jessie (64-bit)*](#debian-jessie-80-64-bit)
 - [*Debian 7.7 Wheezy (64-bit)*](#debian-wheezystable-7x-64-bit)
-
-## Debian Jessie 8.0 (64-bit)
-
-Debian 8 comes with a 3.14.0 Linux kernel, and a `docker.io` package which
-installs all its prerequisites from Debian's repository.
-
-> **Note**:
-> Debian contains a much older KDE3/GNOME2 package called ``docker``, so the
-> package and the executable are called ``docker.io``.
-
-### Installation
-
-To install the latest Debian package (may not be the latest Docker release):
-
-    $ sudo apt-get update
-    $ sudo apt-get install docker.io
-
-To verify that everything has worked as expected:
-
-    $ sudo docker run -i -t ubuntu /bin/bash
-
-This should download the `ubuntu` image, and then start `bash` in a container.
-
-> **Note**:
-> If you want to enable memory and swap accounting, see
-> [this](/installation/ubuntulinux/#memory-and-swap-accounting).
-
-## Debian Wheezy/Stable 7.x (64-bit)
-
-Docker requires Kernel 3.8+, while Wheezy ships with Kernel 3.2 (for more details
-on why 3.8 is required, see the discussion on
-[bug #407](https://github.com/docker/docker/issues/407)).
-
-Fortunately, wheezy-backports currently has [Kernel 3.16](https://packages.debian.org/search?suite=wheezy-backports&section=all&arch=any&searchon=names&keywords=linux-image-amd64),
-which is officially supported by Docker.
-
-### Installation
-
-1. Install the kernel from wheezy-backports.
-
-   Add the following line to your `/etc/apt/sources.list`:
-
-   `deb http://http.debian.net/debian wheezy-backports main`
-
-   then install the `linux-image-amd64` package (note the use of
-   `-t wheezy-backports`):
-
-        $ sudo apt-get update
-        $ sudo apt-get install -t wheezy-backports linux-image-amd64
-
-2.
Install Docker using the get.docker.com script:
-
-    `curl -sSL https://get.docker.com/ | sh`
-
-## Giving non-root access
-
-The `docker` daemon always runs as the `root` user, and binds to a
-Unix socket instead of a TCP port. By default that
-Unix socket is owned by the user `root`, and so, by default, you can
-access it with `sudo`.
-
-If you (or your Docker installer) create a Unix group called `docker`
-and add users to it, then the `docker` daemon will make the
-Unix socket readable and writable by the `docker` group when the daemon
-starts. The `docker` daemon must always run as the root user, but if you
-run the `docker` client as a user in the `docker` group then you don't
-need to add `sudo` to all the client commands. From Docker 0.9.0 you can
-use the `-G` flag to specify an alternative group.
-
-> **Warning**:
-> The `docker` group (or the group specified with the `-G` flag) is
-> `root`-equivalent; see [*Docker Daemon Attack Surface*](
-> /articles/security/#docker-daemon-attack-surface) for details.
-
-**Example:**
-
-    # Add the docker group if it doesn't already exist.
-    $ sudo groupadd docker
-
-    # Add the connected user "${USER}" to the docker group.
-    # Change the user name to match your preferred user.
-    # You may have to log out and log back in again for
-    # this to take effect.
-    $ sudo gpasswd -a ${USER} docker
-
-    # Restart the Docker daemon.
-    $ sudo service docker restart
-
-
-## What next?
-
-Continue with the [User Guide](/userguide/).
diff --git a/installation/fedora.md~ b/installation/fedora.md~
deleted file mode 100644
index ed4e8372a4..0000000000
--- a/installation/fedora.md~
+++ /dev/null
@@ -1,84 +0,0 @@
-page_title: Installation on Fedora
-page_description: Instructions for installing Docker on Fedora.
-page_keywords: Docker, Docker documentation, Fedora, requirements, linux
-
-# Fedora
-
-Docker is supported on the following versions of Fedora:
-
-- [*Fedora 20 (64-bit)*](#fedora-20-installation)
-- [*Fedora 21 and later (64-bit)*](#fedora-21-and-later-installation)
-
-Currently the Fedora project will only support Docker when running on kernels
-shipped by the distribution. There are kernel changes which will cause issues
-if one decides to step outside that box and run non-distribution kernel packages.
-
-## Fedora 21 and later installation
-
-Install the `docker` package, which will install Docker on our host.
-
-    $ sudo yum -y install docker
-
-To update the `docker` package:
-
-    $ sudo yum -y update docker
-
-Please continue with [Starting the Docker daemon](#starting-the-docker-daemon).
-
-## Fedora 20 installation
-
-For `Fedora 20`, there is a package name conflict with a system tray application
-and its executable, so the Docker RPM package was called `docker-io`.
-
-To proceed with `docker-io` installation on Fedora 20, please remove the `docker`
-package first.
-
-    $ sudo yum -y remove docker
-    $ sudo yum -y install docker-io
-
-To update the `docker-io` package:
-
-    $ sudo yum -y update docker-io
-
-Please continue with [Starting the Docker daemon](#starting-the-docker-daemon).
-
-## Starting the Docker daemon
-
-Now that it's installed, let's start the Docker daemon.
-
-    $ sudo systemctl start docker
-
-If we want Docker to start at boot, we should also:
-
-    $ sudo systemctl enable docker
-
-Now let's verify that Docker is working.
-
-    $ sudo docker run -i -t fedora /bin/bash
-
-> Note: If you get a `Cannot start container` error mentioning SELinux
-> or permission denied, you may need to update the SELinux policies.
-> This can be done using `sudo yum upgrade selinux-policy` and then rebooting.
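When chasing an SELinux error like the one above, the first question is whether SELinux is actually enforcing. A small sketch that reports the state (it reads the sysfs interface directly, so it works even where the `getenforce` tool is not installed):

```shell
#!/bin/sh
# Print the current SELinux enforcement state, if the kernel exposes one.
selinux_state() {
    if [ -r /sys/fs/selinux/enforce ]; then
        if [ "$(cat /sys/fs/selinux/enforce)" = "1" ]; then
            echo "enforcing"
        else
            echo "permissive"
        fi
    else
        echo "disabled or not available"
    fi
}

selinux_state
```

If the state is `enforcing`, update the policy as described in the note rather than disabling SELinux.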
- -## Granting rights to users to use Docker - -The `docker` command line tool contacts the `docker` daemon process via a -socket file `/var/run/docker.sock` owned by `root:root`. Though it's -[recommended](https://lists.projectatomic.io/projectatomic-archives/atomic-devel/2015-January/msg00034.html) -to use `sudo` for docker commands, if users wish to avoid it, an administrator can -create a `docker` group, have it own `/var/run/docker.sock`, and add users to this group. - - $ sudo groupadd docker - $ sudo chown root:docker /var/run/docker.sock - $ sudo usermod -a -G docker $USERNAME - -## Custom daemon options - -If you need to add an HTTP Proxy, set a different directory or partition for the -Docker runtime files, or make other customizations, read our Systemd article to -learn how to [customize your Systemd Docker daemon options](/articles/systemd/). - -## What next? - -Continue with the [User Guide](/userguide/). - diff --git a/installation/frugalware.md~ b/installation/frugalware.md~ deleted file mode 100644 index 6b4db23b26..0000000000 --- a/installation/frugalware.md~ +++ /dev/null @@ -1,50 +0,0 @@ -page_title: Installation on FrugalWare -page_description: Installation instructions for Docker on FrugalWare. -page_keywords: frugalware linux, virtualization, docker, documentation, installation - -# FrugalWare - -Installing on FrugalWare is handled via the official packages: - - - [lxc-docker i686](http://www.frugalware.org/packages/200141) - - [lxc-docker x86_64](http://www.frugalware.org/packages/200130) - -The lxc-docker package will install the latest tagged version of Docker. - -## Dependencies - -Docker depends on several packages which are specified as dependencies -in the packages. The core dependencies are: - - - systemd - - lvm2 - - sqlite3 - - libguestfs - - lxc - - iproute2 - - bridge-utils - -## Installation - -A simple - - pacman -S lxc-docker - -is all that is needed. - -## Starting Docker - -There is a systemd service unit created for Docker. 
To start Docker as a
-service:
-
-    $ sudo systemctl start lxc-docker
-
-To start on system boot:
-
-    $ sudo systemctl enable lxc-docker
-
-## Custom daemon options
-
-If you need to add an HTTP Proxy, set a different directory or partition for the
-Docker runtime files, or make other customizations, read our systemd article to
-learn how to [customize your systemd Docker daemon options](/articles/systemd/).
diff --git a/installation/gentoolinux.md~ b/installation/gentoolinux.md~
deleted file mode 100644
index 716eab9d82..0000000000
--- a/installation/gentoolinux.md~
+++ /dev/null
@@ -1,97 +0,0 @@
-page_title: Installation on Gentoo
-page_description: Installation instructions for Docker on Gentoo.
-page_keywords: gentoo linux, virtualization, docker, documentation, installation
-
-# Gentoo
-
-Installing Docker on Gentoo Linux can be accomplished in one of two ways: the **official** way and the `docker-overlay` way.
-
-See the official project page of the Gentoo [Docker](https://wiki.gentoo.org/wiki/Project:Docker) team.
-
-## Official way
-
-The first and recommended way, if you are looking for a stable
-experience, is to use the official `app-emulation/docker` package directly
-from the tree.
-
-If any issues arise from this ebuild, including missing kernel
-configuration flags or dependencies, open a bug
-on the Gentoo [Bugzilla](https://bugs.gentoo.org) assigned to `docker AT gentoo DOT org`,
-or join and ask in the official
-[IRC](http://webchat.freenode.net?channels=%23gentoo-containers&uio=d4) channel on the Freenode network.
-
-## docker-overlay way
-
-If you're looking for a `-bin` ebuild, a live ebuild, or a bleeding-edge
-ebuild, use the provided overlay, [docker-overlay](https://github.com/tianon/docker-overlay),
-which can be added using `app-portage/layman`. The most accurate and
-up-to-date documentation for properly installing and using the overlay
-can be found in the [overlay README](https://github.com/tianon/docker-overlay/blob/master/README.md#using-this-overlay).
-
-If any issues arise from this ebuild or the resulting binary, including
-and especially missing kernel configuration flags or dependencies,
-open an [issue](https://github.com/tianon/docker-overlay/issues) on
-the `docker-overlay` repository or ping `tianon` directly in the `#docker`
-IRC channel on the Freenode network.
-
-## Installation
-
-### Available USE flags
-
-| USE Flag | Default | Description |
-| ------------- |:-------:|:------------|
-| aufs | |Enables dependencies for the "aufs" graph driver, including necessary kernel flags.|
-| btrfs | |Enables dependencies for the "btrfs" graph driver, including necessary kernel flags.|
-| contrib | Yes |Install additional contributed scripts and components.|
-| device-mapper | Yes |Enables dependencies for the "devicemapper" graph driver, including necessary kernel flags.|
-| doc | |Add extra documentation (API, Javadoc, etc). It is recommended to enable per package instead of globally.|
-| lxc | |Enables dependencies for the "lxc" execution driver.|
-| vim-syntax | |Pulls in related vim syntax scripts.|
-| zsh-completion| |Enable zsh completion support.|
-
-USE flags are described in detail on [tianon's
-blog](https://tianon.github.io/post/2014/05/17/docker-on-gentoo.html).
-
-The package should properly pull in all the necessary dependencies and
-prompt for all necessary kernel options.
-
-    $ sudo emerge -av app-emulation/docker
-
-> Note: Sometimes there is a disparity between the latest versions
-> in the official **Gentoo tree** and the **docker-overlay**.
-> Please be patient, and the latest version should propagate shortly.
-
-## Starting Docker
-
-Ensure that you are running a kernel that includes all the necessary
-modules and configuration (and, optionally, device-mapper, AUFS, or
-Btrfs support, depending on the storage driver you've decided to use).
-
-To use Docker, the `docker` daemon must be running as **root**.
-To use Docker as a **non-root** user, add yourself to the **docker**
-group by running the following command:
-
-    $ sudo usermod -a -G docker user
-
-### OpenRC
-
-To start the `docker` daemon:
-
-    $ sudo /etc/init.d/docker start
-
-To start on system boot:
-
-    $ sudo rc-update add docker default
-
-### systemd
-
-To start the `docker` daemon:
-
-    $ sudo systemctl start docker
-
-To start on system boot:
-
-    $ sudo systemctl enable docker
-
-If you need to add an HTTP Proxy, set a different directory or partition for the
-Docker runtime files, or make other customizations, read our systemd article to
-learn how to [customize your systemd Docker daemon options](/articles/systemd/).
diff --git a/installation/google.md~ b/installation/google.md~
deleted file mode 100644
index 1cee5290da..0000000000
--- a/installation/google.md~
+++ /dev/null
@@ -1,41 +0,0 @@
-page_title: Installation on Google Cloud Platform
-page_description: Installation instructions for Docker on the Google Cloud Platform.
-page_keywords: Docker, Docker documentation, installation, google, Google Compute Engine, Google Cloud Platform
-
-# Google Cloud Platform
-
-## QuickStart with Container-optimized Google Compute Engine images
-
-1. Go to [Google Cloud Console][1] and create a new Cloud Project with
-   [Compute Engine enabled][2]
-
-2. Download and configure the [Google Cloud SDK][3] to use your
-   project with the following commands:
-
-        $ curl -sSL https://sdk.cloud.google.com | bash
-        $ gcloud auth login
-        $ gcloud config set project <project-id>
-
-3. Start a new instance using the latest [Container-optimized image][4]
-   (select a zone close to you and the desired instance size):
-
-        $ gcloud compute instances create docker-playground \
-          --image container-vm \
-          --zone us-central1-a \
-          --machine-type f1-micro
-
-4. Connect to the instance using SSH:
-
-        $ gcloud compute ssh --zone us-central1-a docker-playground
-        docker-playground:~$ sudo docker run hello-world
-        Hello from Docker.
- This message shows that your installation appears to be working correctly. - ... - -Read more about [deploying Containers on Google Cloud Platform][5]. - -[1]: https://cloud.google.com/console -[2]: https://developers.google.com/compute/docs/signup -[3]: https://developers.google.com/cloud/sdk -[4]: https://developers.google.com/compute/docs/containers#container-optimized_google_compute_engine_images -[5]: https://developers.google.com/compute/docs/containers diff --git a/installation/images/win/_01.gif b/installation/images/win/_01.gif deleted file mode 100644 index fbfc0a3028..0000000000 Binary files a/installation/images/win/_01.gif and /dev/null differ diff --git a/installation/images/win/_02.gif b/installation/images/win/_02.gif deleted file mode 100644 index 16d8a688ff..0000000000 Binary files a/installation/images/win/_02.gif and /dev/null differ diff --git a/installation/images/win/_06.gif b/installation/images/win/_06.gif deleted file mode 100644 index d935c02ae9..0000000000 Binary files a/installation/images/win/_06.gif and /dev/null differ diff --git a/installation/images/win/cygwin.gif b/installation/images/win/cygwin.gif deleted file mode 100644 index d00445486e..0000000000 Binary files a/installation/images/win/cygwin.gif and /dev/null differ diff --git a/installation/images/win/hp_bios_vm.JPG b/installation/images/win/hp_bios_vm.JPG deleted file mode 100644 index 468d95ef5a..0000000000 Binary files a/installation/images/win/hp_bios_vm.JPG and /dev/null differ diff --git a/installation/images/win/putty.gif b/installation/images/win/putty.gif deleted file mode 100644 index e7d418d2de..0000000000 Binary files a/installation/images/win/putty.gif and /dev/null differ diff --git a/installation/images/win/putty_2.gif b/installation/images/win/putty_2.gif deleted file mode 100644 index 053ad231fc..0000000000 Binary files a/installation/images/win/putty_2.gif and /dev/null differ diff --git a/installation/images/win/run_02_.gif 
b/installation/images/win/run_02_.gif deleted file mode 100644 index 4243bf6186..0000000000 Binary files a/installation/images/win/run_02_.gif and /dev/null differ diff --git a/installation/images/win/run_03.gif b/installation/images/win/run_03.gif deleted file mode 100644 index c1f620a3c3..0000000000 Binary files a/installation/images/win/run_03.gif and /dev/null differ diff --git a/installation/images/win/run_04.gif b/installation/images/win/run_04.gif deleted file mode 100644 index 75092762fc..0000000000 Binary files a/installation/images/win/run_04.gif and /dev/null differ diff --git a/installation/images/win/ssh-config.gif b/installation/images/win/ssh-config.gif deleted file mode 100644 index 4fd3b2b333..0000000000 Binary files a/installation/images/win/ssh-config.gif and /dev/null differ diff --git a/installation/images/win/ts_go_bios.JPG b/installation/images/win/ts_go_bios.JPG deleted file mode 100644 index c4159fc715..0000000000 Binary files a/installation/images/win/ts_go_bios.JPG and /dev/null differ diff --git a/installation/images/win/ts_no_docker.JPG b/installation/images/win/ts_no_docker.JPG deleted file mode 100644 index 9ccba01c1e..0000000000 Binary files a/installation/images/win/ts_no_docker.JPG and /dev/null differ diff --git a/installation/mac.md~ b/installation/mac.md~ deleted file mode 100644 index 9bf7632680..0000000000 --- a/installation/mac.md~ +++ /dev/null @@ -1,327 +0,0 @@ -page_title: Installation on Mac OS X -page_description: Instructions for installing Docker on OS X using boot2docker. -page_keywords: Docker, Docker documentation, requirements, boot2docker, VirtualBox, SSH, Linux, OSX, OS X, Mac - -# Install Docker on Mac OS X - -You can install Docker using Boot2Docker to run `docker` commands at your command-line. -Choose this installation if you are familiar with the command-line or plan to -contribute to the Docker project on GitHub. 
- -Alternatively, you may want to try Kitematic, an application that lets you set up Docker and -run containers using a graphical user interface (GUI). - -Download Kitematic - - -## Command-line Docker with Boot2Docker - -Because the Docker daemon uses Linux-specific kernel features, you can't run -Docker natively in OS X. Instead, you must install the Boot2Docker application. -The application includes a VirtualBox Virtual Machine (VM), Docker itself, and the -Boot2Docker management tool. - -Boot2Docker itself is a lightweight Linux virtual machine made -specifically to run the Docker daemon on Mac OS X; the management tool controls it. The VirtualBox VM runs -completely from RAM, is a small ~24MB download, and boots in approximately 5s. - -**Requirements** - -Your Mac must be running OS X 10.6 "Snow Leopard" or newer to run Boot2Docker. - -### Learn the key concepts before installing - -In a Docker installation on Linux, your machine is both the localhost and the -Docker host. In networking, localhost means your computer. The Docker host is -the machine on which the containers run. - -On a typical Linux installation, the Docker client, the Docker daemon, and any -containers run directly on your localhost. This means you can address ports on a -Docker container using standard localhost addressing such as `localhost:8000` or -`0.0.0.0:8376`. - -![Linux Architecture Diagram](/installation/images/linux_docker_host.png) - -In an OS X installation, the `docker` daemon is running inside a Linux virtual -machine provided by Boot2Docker. - -![OSX Architecture Diagram](/installation/images/mac_docker_host.png) - -In OS X, the Docker host address is the address of the Linux VM. -When you start the `boot2docker` process, the VM is assigned an IP address. Under -`boot2docker`, ports on a container map to ports on the VM. To see this in -practice, work through the exercises on this page. - - -### Install Boot2Docker - -1. 
Go to the [boot2docker/osx-installer](https://github.com/boot2docker/osx-installer/releases/latest) release page. - -2. Download Boot2Docker by clicking `Boot2Docker-x.x.x.pkg` in the "Downloads" -section. - -3. Install Boot2Docker by double-clicking the package. - - The installer places Boot2Docker in your "Applications" folder. - -The installation places the `docker` and `boot2docker` binaries in your -`/usr/local/bin` directory. - - -## Start the Boot2Docker Application - -To run a Docker container, you first start the `boot2docker` VM and then issue -`docker` commands to create, load, and manage containers. You can launch -`boot2docker` from your Applications folder or from the command line. - -> **NOTE**: Boot2Docker is designed as a development tool. You should not use -> it in production environments. - -### From the Applications folder - -When you launch the "Boot2Docker" application from your "Applications" folder, the -application: - -* opens a terminal window - -* creates a $HOME/.boot2docker directory - -* creates a VirtualBox ISO and certs - -* starts a VirtualBox VM running the `docker` daemon - -Once the launch completes, you can run `docker` commands. A good way to verify -your setup succeeded is to run the `hello-world` container. - - $ docker run hello-world - Unable to find image 'hello-world:latest' locally - 511136ea3c5a: Pull complete - 31cbccb51277: Pull complete - e45a5af57b00: Pull complete - hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security. - Status: Downloaded newer image for hello-world:latest - Hello from Docker. - This message shows that your installation appears to be working correctly. - - To generate this message, Docker took the following steps: - 1. The Docker client contacted the Docker daemon. - 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. 
- (Assuming it was not already locally available.) - 3. The Docker daemon created a new container from that image which runs the - executable that produces the output you are currently reading. - 4. The Docker daemon streamed that output to the Docker client, which sent it - to your terminal. - - To try something more ambitious, you can run an Ubuntu container with: - $ docker run -it ubuntu bash - - For more examples and ideas, visit: - http://docs.docker.com/userguide/ - - -A more typical way to start and stop `boot2docker` is using the command line. - -### From your command line - -To initialize and run `boot2docker` from the command line, do the following: - -1. Create a new Boot2Docker VM. - - $ boot2docker init - - This creates a new virtual machine. You only need to run this command once. - -2. Start the `boot2docker` VM. - - $ boot2docker start - -3. Display the environment variables for the Docker client. - - $ boot2docker shellinit - Writing /Users/mary/.boot2docker/certs/boot2docker-vm/ca.pem - Writing /Users/mary/.boot2docker/certs/boot2docker-vm/cert.pem - Writing /Users/mary/.boot2docker/certs/boot2docker-vm/key.pem - export DOCKER_HOST=tcp://192.168.59.103:2376 - export DOCKER_CERT_PATH=/Users/mary/.boot2docker/certs/boot2docker-vm - export DOCKER_TLS_VERIFY=1 - - The specific paths and address on your machine will be different. - -4. To set the environment variables in your shell, do the following: - - $ eval "$(boot2docker shellinit)" - - You can also set them manually by using the `export` commands `boot2docker` - returns. - -5. Run the `hello-world` container to verify your setup. - - $ docker run hello-world - - -## Basic Boot2Docker Exercises - -At this point, you should have `boot2docker` running and the `docker` client -environment initialized. To verify this, run the following commands: - - $ boot2docker status - $ docker version - -Work through this section to try some practical container tasks using the `boot2docker` VM. 
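
The `eval "$(boot2docker shellinit)"` step works because `shellinit` prints `export` statements on stdout, and `eval` executes them in the *current* shell; a plain child process could never modify its parent shell's environment. A minimal sketch of the mechanism, using a hypothetical stand-in function in place of the real `boot2docker shellinit` output:

```shell
# Hypothetical stand-in for `boot2docker shellinit`: like the real
# command, it simply prints export statements on stdout.
fake_shellinit() {
    echo 'export DOCKER_HOST=tcp://192.168.59.103:2376'
    echo 'export DOCKER_TLS_VERIFY=1'
}

# Running the function alone changes nothing; eval-ing its output
# sets the variables in the current shell.
eval "$(fake_shellinit)"

echo "DOCKER_HOST is now: $DOCKER_HOST"
```

This is also why the docs offer the manual alternative: copying the `export` lines by hand into your shell has exactly the same effect as the `eval`.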
- -### Access container ports - -1. Start an NGINX container on the DOCKER_HOST. - - $ docker run -d -P --name web nginx - - Normally, the `docker run` command starts a container, runs it, and then - exits. The `-d` flag keeps the container running in the background - after the `docker run` command completes. The `-P` flag publishes exposed ports from the - container to your local host; this lets you access them from your Mac. - -2. Display your running container with the `docker ps` command. - - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 5fb65ff765e9 nginx:latest "nginx -g 'daemon of 3 minutes ago Up 3 minutes 0.0.0.0:49156->443/tcp, 0.0.0.0:49157->80/tcp web - - At this point, you can see `nginx` is running as a daemon. - -3. View just the container's ports. - - $ docker port web - 443/tcp -> 0.0.0.0:49156 - 80/tcp -> 0.0.0.0:49157 - - This tells you that the `web` container's port `80` is mapped to port - `49157` on your Docker host. - -4. Enter the `http://localhost:49157` address (`localhost` is `0.0.0.0`) in your browser: - - ![Bad Address](/installation/images/bad_host.png) - - This didn't work. The reason it doesn't work is your `DOCKER_HOST` address is - not the localhost address (0.0.0.0) but is instead the address of the - `boot2docker` VM. - -5. Get the address of the `boot2docker` VM. - - $ boot2docker ip - 192.168.59.103 - -6. Enter the `http://192.168.59.103:49157` address in your browser: - - ![Correct Addressing](/installation/images/good_host.png) - - Success! - -7. To stop and then remove your running `nginx` container, do the following: - - $ docker stop web - $ docker rm web - -### Mount a volume on the container - -When you start `boot2docker`, it automatically shares your `/Users` directory -with the VM. You can use this share point to mount directories onto your container. -The next exercise demonstrates how to do this. - -1. Change to your user `$HOME` directory. - - $ cd $HOME - -2. Make a new `site` directory. 
- - $ mkdir site - -3. Change into the `site` directory. - - $ cd site - -4. Create a new `index.html` file. - - $ echo "my new site" > index.html - -5. Start a new `nginx` container and replace the `html` folder with your `site` directory. - - $ docker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx - -6. Get the `mysite` container's port. - - $ docker port mysite - 80/tcp -> 0.0.0.0:49166 - 443/tcp -> 0.0.0.0:49165 - -7. Open the site in a browser: - - ![My site page](/installation/images/newsite_view.png) - -8. Try adding a page to your `$HOME/site` in real time. - - $ echo "This is cool" > cool.html - -9. Open the new page in the browser. - - ![Cool page](/installation/images/cool_view.png) - -10. Stop and then remove your running `mysite` container. - - $ docker stop mysite - $ docker rm mysite - -## Upgrade Boot2Docker - -If you are running Boot2Docker 1.4.1 or greater, you can upgrade Boot2Docker from -the command line. If you are running an older version, you should use the -package provided by the `boot2docker` repository. - -### From the command line - -To upgrade from 1.4.1 or greater, you can do this: - -1. Open a terminal on your local machine. - -2. Stop the `boot2docker` application. - - $ boot2docker stop - -3. Run the upgrade command. - - $ boot2docker upgrade - - -### Use the installer - -To upgrade any version of Boot2Docker, do this: - -1. Open a terminal on your local machine. - -2. Stop the `boot2docker` application. - - $ boot2docker stop - -3. Go to the [boot2docker/osx-installer](https://github.com/boot2docker/osx-installer/releases/latest) release page. - -4. Download Boot2Docker by clicking `Boot2Docker-x.x.x.pkg` in the "Downloads" -section. - -5. Install Boot2Docker by double-clicking the package. - - The installer places Boot2Docker in your "Applications" folder. - - -## Learning more and Acknowledgement - - -Use `boot2docker help` to list the full command line reference. 
For more -information about using SSH or SCP to access the Boot2Docker VM, see the README -at the [Boot2Docker repository](https://github.com/boot2docker/boot2docker). - -Thanks to Chris Jones whose [blog](http://goo.gl/Be6cCk) inspired me to redo -this page. - -Continue with the [Docker User Guide](/userguide/). \ No newline at end of file diff --git a/installation/oracle.md~ b/installation/oracle.md~ deleted file mode 100644 index 6d2f782b49..0000000000 --- a/installation/oracle.md~ +++ /dev/null @@ -1,125 +0,0 @@ -page_title: Installation on Oracle Linux -page_description: Installation instructions for Docker on Oracle Linux. -page_keywords: Docker, Docker documentation, requirements, linux, rhel, centos, oracle, ol - -# Oracle Linux 6 and 7 - -You do not require an Oracle Linux Support subscription to install Docker on -Oracle Linux. - -*For Oracle Linux customers with an active support subscription:* -Docker is available in either the `ol6_x86_64_addons` or `ol7_x86_64_addons` -channel for Oracle Linux 6 and Oracle Linux 7 on the [Unbreakable Linux Network -(ULN)](https://linux.oracle.com). - -*For Oracle Linux users without an active support subscription:* -Docker is available in the appropriate `ol6_addons` or `ol7_addons` repository -on [Oracle Public Yum](http://public-yum.oracle.com). - -Docker requires the use of the Unbreakable Enterprise Kernel Release 3 (3.8.13) -or higher on Oracle Linux. This kernel supports the Docker btrfs storage engine -on both Oracle Linux 6 and 7. - -Due to current Docker limitations, Docker is only able to run on the x86_64 -architecture. - -## To enable the *addons* channel via the Unbreakable Linux Network: - -1. Enable either the *ol6\_x86\_64\_addons* or *ol7\_x86\_64\_addons* channel -via the ULN web interface. -Consult the [Unbreakable Linux Network User's -Guide](http://docs.oracle.com/cd/E52668_01/E39381/html/index.html) for -documentation on subscribing to channels. 
- -## To enable the *addons* repository via Oracle Public Yum: - -The latest releases of Oracle Linux 6 and 7 are automatically configured to use -the Oracle Public Yum repositories during installation. However, the *addons* -repository is not enabled by default. - -To enable the *addons* repository: - -1. Edit either `/etc/yum.repos.d/public-yum-ol6.repo` or -`/etc/yum.repos.d/public-yum-ol7.repo` -and set `enabled=1` in the `[ol6_addons]` or the `[ol7_addons]` stanza. - -## To install Docker: - -1. Ensure the appropriate *addons* channel or repository has been enabled. - -2. Use yum to install the Docker package: - - $ sudo yum install docker - -## To start Docker: - -1. Now that it's installed, start the Docker daemon: - - 1. On Oracle Linux 6: - - $ sudo service docker start - - 2. On Oracle Linux 7: - - $ sudo systemctl start docker.service - -2. If you want the Docker daemon to start automatically at boot: - - 1. On Oracle Linux 6: - - $ sudo chkconfig docker on - - 2. On Oracle Linux 7: - - $ sudo systemctl enable docker.service - -**Done!** - -## Custom daemon options - -If you need to add an HTTP Proxy, set a different directory or partition for the -Docker runtime files, or make other customizations, read our systemd article to -learn how to [customize your systemd Docker daemon options](/articles/systemd/). - -## Using the btrfs storage engine - -Docker on Oracle Linux 6 and 7 supports the use of the btrfs storage engine. -Before enabling btrfs support, ensure that `/var/lib/docker` is stored on a -btrfs-based filesystem. Review [Chapter -5](http://docs.oracle.com/cd/E37670_01/E37355/html/ol_btrfs.html) of the [Oracle -Linux Administrator's Solution -Guide](http://docs.oracle.com/cd/E37670_01/E37355/html/index.html) for details -on how to create and mount btrfs filesystems. - -To enable btrfs support on Oracle Linux: - -1. Ensure that `/var/lib/docker` is on a btrfs filesystem. -2. Edit `/etc/sysconfig/docker` and add `-s btrfs` to the `OTHER_ARGS` field. 
-3. Restart the Docker daemon (`sudo service docker restart` on Oracle Linux 6, -or `sudo systemctl restart docker.service` on Oracle Linux 7). - -You can now continue with the [Docker User Guide](/userguide/). - -## Known issues - -### Docker unmounts btrfs filesystem on shutdown -If you're running Docker using the btrfs storage engine and you stop the Docker -service, it will unmount the btrfs filesystem during the shutdown process. You -should ensure the filesystem is mounted properly prior to restarting the Docker -service. - -On Oracle Linux 7, you can use a `systemd.mount` definition and modify the -Docker `systemd.service` to depend on the btrfs mount defined in systemd. - -### SELinux Support on Oracle Linux 7 -SELinux must be set to `Permissive` or `Disabled` in `/etc/sysconfig/selinux` to -use the btrfs storage engine on Oracle Linux 7. - -## Further issues? - -If you have a current Basic or Premier Support Subscription for Oracle Linux, -you can report any issues you have with the installation of Docker via a Service -Request at [My Oracle Support](http://support.oracle.com). - -If you do not have an Oracle Linux Support Subscription, you can use the [Oracle -Linux -Forum](https://community.oracle.com/community/server_%26_storage_systems/linux/oracle_linux) for community-based support. diff --git a/installation/rackspace.md~ b/installation/rackspace.md~ deleted file mode 100644 index 9fddf5e450..0000000000 --- a/installation/rackspace.md~ +++ /dev/null @@ -1,81 +0,0 @@ -page_title: Installation on Rackspace Cloud -page_description: Installation instructions for Docker on Rackspace Cloud. -page_keywords: Rackspace Cloud, installation, docker, linux, ubuntu - -# Rackspace Cloud - -Installing Docker on Ubuntu provided by Rackspace is pretty -straightforward, and you should mostly be able to follow the -[*Ubuntu*](../ubuntulinux/#ubuntu-linux) installation guide. - -**However, there is one caveat:** - -If you are using any Linux not already shipping with the 3.8 kernel you -will need to install it. And this is a little more difficult on -Rackspace. 
- -Rackspace boots its servers using grub's `menu.lst` -and does not accept non-`virtual` (e.g., Xen-compatible) kernel packages there, -although such kernels do work. As a result, `update-grub` does not have the -expected effect, and you will need to set the kernel manually. - -**Do not attempt this on a production machine!** - - # update apt - $ apt-get update - - # install the new kernel - $ apt-get install linux-generic-lts-raring - -Now that the kernel is installed in `/boot/`, you need to make the system boot -from it next time. - - # find the exact names - $ find /boot/ -name '*3.8*' - - # this should return some results - -Next, manually edit `/boot/grub/menu.lst`; you will find a section at the -bottom with the existing options. Copy the top one and substitute the new -kernel into that. Make sure the new kernel is on top, and double check that the -kernel and initrd lines point to the right files. - - # now edit /boot/grub/menu.lst - $ vi /boot/grub/menu.lst - -It will probably look something like this: - - ## ## End Default Options ## - - title Ubuntu 12.04.2 LTS, kernel 3.8.x generic - root (hd0) - kernel /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0 - initrd /boot/initrd.img-3.8.0-19-generic - - title Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual - root (hd0) - kernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0 - initrd /boot/initrd.img-3.2.0-38-virtual - - title Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode) - root (hd0) - kernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash single - initrd /boot/initrd.img-3.2.0-38-virtual - -Reboot the server (either via the command line or the console). - - # reboot - -Verify the kernel was updated: - - $ uname -a - # Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux - - # nice! 3.8. 
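
If you want to script that last verification step, version numbers can be compared with `sort -V`. This is a sketch under the assumption that GNU coreutils is available (it is on Ubuntu); `kernel_at_least` is a hypothetical helper, not part of any Rackspace or Docker tooling:

```shell
# Check that the running kernel is at least a given version.
# Relies on GNU sort's -V (version sort) option.
kernel_at_least() {
    min="$1"
    cur="$(uname -r | cut -d- -f1)"   # e.g. "3.8.0-19-generic" -> "3.8.0"
    # If cur >= min, then min sorts first (or they are equal).
    [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n 1)" = "$min" ]
}

if kernel_at_least 3.8; then
    echo "kernel is new enough for Docker"
else
    echo "kernel too old; install and boot the 3.8 kernel first"
fi
```

Plain lexical comparison would get this wrong (`3.10` sorts before `3.8` as a string), which is exactly what `-V` fixes.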
- -Now you can finish with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) -instructions. diff --git a/installation/rhel.md~ b/installation/rhel.md~ deleted file mode 100644 index 58b2316c6f..0000000000 --- a/installation/rhel.md~ +++ /dev/null @@ -1,126 +0,0 @@ -page_title: Installation on Red Hat Enterprise Linux -page_description: Instructions for installing Docker on Red Hat Enterprise Linux. -page_keywords: Docker, Docker documentation, requirements, linux, rhel - -# Red Hat Enterprise Linux - -Docker is supported on the following versions of RHEL: - -- [*Red Hat Enterprise Linux 7 (64-bit)*](#red-hat-enterprise-linux-7-installation) -- [*Red Hat Enterprise Linux 6.5 (64-bit)*](#red-hat-enterprise-linux-6.5-installation) or later - -## Kernel support - -RHEL will only support Docker via the *extras* channel or EPEL package when -running on kernels shipped by the distribution. There are kernel changes which -will cause issues if one decides to step outside that box and run -non-distribution kernel packages. - -## Red Hat Enterprise Linux 7 Installation - -**Red Hat Enterprise Linux 7 (64 bit)** has [shipped with -Docker](https://access.redhat.com/site/products/red-hat-enterprise-linux/docker-and-containers). -An overview and some guidance can be found in the [Release -Notes](https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Linux_Containers_with_Docker_Format.html). - -Docker is located in the *extras* channel. To install Docker: - -1. Enable the *extras* channel: - - $ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms - -2. 
Install Docker: - - $ sudo yum install docker - -Additional installation, configuration, and usage information, -including a [Get Started with Docker Containers in Red Hat -Enterprise Linux 7](https://access.redhat.com/site/articles/881893) -guide, can be found by Red Hat customers on the [Red Hat Customer -Portal](https://access.redhat.com/). - -Please continue with the [Starting the Docker daemon](#starting-the-docker-daemon). - -## Red Hat Enterprise Linux 6.5 Installation - -You will need **64 bit** [RHEL -6.5](https://access.redhat.com/site/articles/3078#RHEL6) or later, with -a RHEL 6 kernel version 2.6.32-431 or higher as this has specific kernel -fixes to allow Docker to work. - -Docker is available for **RHEL6.5** on EPEL. Please note that -this package is part of [Extra Packages for Enterprise Linux -(EPEL)](https://fedoraproject.org/wiki/EPEL), a community effort to -create and maintain additional packages for the RHEL distribution. - -### Kernel support - -RHEL will only support Docker via the *extras* channel or EPEL package when -running on kernels shipped by the distribution. There are things like namespace -changes which will cause issues if one decides to step outside that box and run -non-distro kernel packages. - -> **Warning**: -> Please keep your system up to date using `yum update` and rebooting -> your system. Keeping your system updated ensures critical security -> vulnerabilities and severe bugs (such as those found in kernel 2.6.32) -> are fixed. - -## Installation - -Firstly, you need to install the EPEL repository. Please follow the -[EPEL installation -instructions](https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F). - -There is a package name conflict with a system tray application -and its executable, so the Docker RPM package was called `docker-io`. - -To proceed with `docker-io` installation, you may need to remove the -`docker` package first. 
- - $ sudo yum -y remove docker - -Next, let's install the `docker-io` package which will install Docker on our host. - - $ sudo yum install docker-io - -To update the `docker-io` package - - $ sudo yum -y update docker-io - -Please continue with the [Starting the Docker daemon](#starting-the-docker-daemon). - -## Starting the Docker daemon - -Now that it's installed, let's start the Docker daemon. - - $ sudo service docker start - -If we want Docker to start at boot, we should also: - - $ sudo chkconfig docker on - -Now let's verify that Docker is working. - - $ sudo docker run -i -t fedora /bin/bash - -> Note: If you get a `Cannot start container` error mentioning SELinux -> or permission denied, you may need to update the SELinux policies. -> This can be done using `sudo yum upgrade selinux-policy` and then rebooting. - -**Done!** - -Continue with the [User Guide](/userguide/). - -## Custom daemon options - -If you need to add an HTTP Proxy, set a different directory or partition for the -Docker runtime files, or make other customizations, read our Systemd article to -learn how to [customize your Systemd Docker daemon options](/articles/systemd/). - - -## Issues? - -If you have any issues - please report them directly in the -[Red Hat Bugzilla for docker-io component]( -https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL&component=docker-io). diff --git a/installation/softlayer.md~ b/installation/softlayer.md~ deleted file mode 100644 index d594896a92..0000000000 --- a/installation/softlayer.md~ +++ /dev/null @@ -1,30 +0,0 @@ -page_title: Installation on IBM SoftLayer -page_description: Installation instructions for Docker on IBM Softlayer. -page_keywords: IBM SoftLayer, virtualization, cloud, docker, documentation, installation - -# IBM SoftLayer - -1. Create an [IBM SoftLayer account]( - https://www.softlayer.com/cloud-servers/). -2. Log in to the [SoftLayer Customer Portal]( - https://control.softlayer.com/). -3. 
From the *Devices* menu select [*Device List*](https://control.softlayer.com/devices) -4. Click *Order Devices* on the top right of the window below the menu bar. -5. Under *Virtual Server* click [*Hourly*](https://manage.softlayer.com/Sales/orderHourlyComputingInstance) -6. Create a new *SoftLayer Virtual Server Instance* (VSI) using the default - values for all the fields and choose: - - - The desired location for *Datacenter* - - *Ubuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit)* - for *Operating System*. - -7. Click the *Continue Your Order* button at the bottom right. -8. Fill out VSI *hostname* and *domain*. -9. Insert the required *User Metadata* and place the order. -10. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) - instructions. - -## What next? - -Continue with the [User Guide](/userguide/). - diff --git a/installation/ubuntulinux.md~ b/installation/ubuntulinux.md~ deleted file mode 100644 index 85a37d768d..0000000000 --- a/installation/ubuntulinux.md~ +++ /dev/null @@ -1,305 +0,0 @@ -page_title: Installation on Ubuntu -page_description: Instructions for installing Docker on Ubuntu. -page_keywords: Docker, Docker documentation, requirements, virtualbox, installation, ubuntu - -#Ubuntu - -Docker is supported on these Ubuntu operating systems: - -- Ubuntu Trusty 14.04 (LTS) -- Ubuntu Precise 12.04 (LTS) -- Ubuntu Saucy 13.10 - -This page instructs you to install using Docker-managed release packages and -installation mechanisms. Using these packages ensures you get the latest release -of Docker. If you wish to install using Ubuntu-managed packages, consult your -Ubuntu documentation. - -##Prerequisites - -Docker requires a 64-bit installation regardless of your Ubuntu version. -Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version -or a newer maintained version are also acceptable. - -Kernels older than 3.10 lack some of the features required to run Docker -containers. 
These older versions are known to have bugs which cause data loss -and frequently panic under certain conditions. - -To check your current kernel version, open a terminal and use `uname -r` to display -your kernel version: - - $ uname -r - 3.11.0-15-generic - ->**Caution** Some Ubuntu OS versions **require a version higher than 3.10** to ->run Docker, see the prerequisites on this page that apply to your Ubuntu ->version. - -###For Trusty 14.04 - -There are no prerequisites for this version. - -###For Precise 12.04 (LTS) - -For Ubuntu Precise, Docker requires the 3.13 kernel version. If your kernel -version is older than 3.13, you must upgrade it. Refer to this table to see -which packages are required for your environment: - - - -
-| Package | Description |
-|---------|-------------|
-| `linux-image-generic-lts-trusty` | Generic Linux kernel image. This kernel has AUFS built in. This is required to run Docker. |
-| `linux-headers-generic-lts-trusty` | Allows packages such as ZFS and VirtualBox guest additions which depend on the kernel headers. If you didn't install the headers for your existing kernel, you can skip these headers for the "trusty" kernel. If you're unsure, you should include this package for safety. |
-| `xserver-xorg-lts-trusty` and `libgl1-mesa-glx-lts-trusty` | Optional in non-graphical environments without Unity/Xorg. Required when running Docker on a machine with a graphical environment. |
-
-To learn more about the reasons for these packages, read the installation
-instructions for backported kernels, specifically the LTS Enablement Stack;
-refer to note 5 under each version.
- -To upgrade your kernel and install the additional packages, do the following: - -1. Open a terminal on your Ubuntu host. - -2. Update your package manager. - - $ sudo apt-get update - -3. Install both the required and optional packages. - - $ sudo apt-get install linux-image-generic-lts-trusty - - Depending on your environment, you may install more as described in the preceding table. - -4. Reboot your host. - - $ sudo reboot - -5. After your system reboots, go ahead and [install Docker](#installing-docker-on-ubuntu). - - -###For Saucy 13.10 (64 bit) - -Docker uses AUFS as the default storage backend. If you don't have this -prerequisite installed, Docker's installation process adds it. - -##Installing Docker on Ubuntu - -Make sure you have installed the prerequisites for your Ubuntu version. Then, -install Docker using the following: - -1. Log into your Ubuntu installation as a user with `sudo` privileges. - -2. Verify that you have `wget` installed. - - $ which wget - - If `wget` isn't installed, install it after updating your package manager: - - $ sudo apt-get update - $ sudo apt-get install wget - -3. Get the latest Docker package. - - $ wget -qO- https://get.docker.com/ | sh - - The system prompts you for your `sudo` password. Then, it downloads and - installs Docker and its dependencies. - -4. Verify `docker` is installed correctly. - - $ sudo docker run hello-world - - This command downloads a test image and runs it in a container. - -## Optional Configurations for Docker on Ubuntu - -This section contains optional procedures for configuring your Ubuntu to work -better with Docker. - -* [Create a docker group](#create-a-docker-group) -* [Adjust memory and swap accounting](#adjust-memory-and-swap-accounting) -* [Enable UFW forwarding](#enable-ufw-forwarding) -* [Configure a DNS server for use by Docker](#configure-a-dns-server-for-docker) - -### Create a docker group - -The `docker` daemon binds to a Unix socket instead of a TCP port. 
By default -that Unix socket is owned by the user `root` and other users can access it with -`sudo`. For this reason, the `docker` daemon always runs as the `root` user. - -To avoid having to use `sudo` when you use the `docker` command, create a Unix -group called `docker` and add users to it. When the `docker` daemon starts, it -makes the Unix socket readable and writable by the `docker` group. - ->**Warning**: The `docker` group is equivalent to the `root` user. For details ->on how this impacts security in your system, see [*Docker Daemon Attack ->Surface*](/articles/security/#docker-daemon-attack-surface). - -To create the `docker` group and add your user: - -1. Log into Ubuntu as a user with `sudo` privileges. - - This procedure assumes you log in as the `ubuntu` user. - -2. Create the `docker` group and add your user. - - $ sudo usermod -aG docker ubuntu - -3. Log out and log back in. - - This ensures your user is running with the correct permissions. - -4. Verify your work by running `docker` without `sudo`. - - $ docker run hello-world - - -### Adjust memory and swap accounting - -When users run Docker, they may see these messages when working with an image: - - WARNING: Your kernel does not support cgroup swap limit. WARNING: Your - kernel does not support swap limit capabilities. Limitation discarded. - -To prevent these messages, enable memory and swap accounting on your system. To -enable these on a system using GNU GRUB (GNU GRand Unified Bootloader), do the -following. - -1. Log into Ubuntu as a user with `sudo` privileges. - -2. Edit the `/etc/default/grub` file. - -3. Set the `GRUB_CMDLINE_LINUX` value as follows: - - GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" - -4. Save and close the file. - -5. Update GRUB. - - $ sudo update-grub - -6. Reboot your system. 
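
The `GRUB_CMDLINE_LINUX` edit can also be done non-interactively. The sketch below applies the change with GNU `sed` to a scratch copy of the file, so you can inspect the result before touching the real `/etc/default/grub`; the starting file contents here are made up for illustration:

```shell
# Work on a scratch copy instead of the real /etc/default/grub.
grub_file=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_CMDLINE_LINUX=""\n' > "$grub_file"

# Replace the whole GRUB_CMDLINE_LINUX line (GNU sed; BSD sed needs -i '').
sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/' "$grub_file"

grep GRUB_CMDLINE_LINUX "$grub_file"
# After editing the real file you would still run:
#   sudo update-grub && sudo reboot
```

Anchoring the pattern on the whole line avoids accidentally rewriting related variables such as `GRUB_CMDLINE_LINUX_DEFAULT`, which also exists on stock Ubuntu systems.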
### Enable UFW forwarding

If you use [UFW (Uncomplicated Firewall)](https://help.ubuntu.com/community/UFW)
on the same host as you run Docker, you'll need to do additional configuration.
Docker uses a bridge to manage container networking. By default, UFW drops all
forwarding traffic. As a result, for Docker to run when UFW is enabled, you
must set UFW's forwarding policy appropriately.

Also, UFW's default set of rules denies all incoming traffic. If you want to
reach your containers from another host, you should also allow incoming
connections on the Docker port (default `2375`).

To configure UFW and allow incoming connections on the Docker port:

1. Log into Ubuntu as a user with `sudo` privileges.

2. Verify that UFW is installed and enabled.

        $ sudo ufw status

3. Open the `/etc/default/ufw` file for editing.

        $ sudo nano /etc/default/ufw

4. Set the `DEFAULT_FORWARD_POLICY` policy to:

        DEFAULT_FORWARD_POLICY="ACCEPT"

5. Save and close the file.

6. Reload UFW to use the new setting.

        $ sudo ufw reload

7. Allow incoming connections on the Docker port.

        $ sudo ufw allow 2375/tcp

### Configure a DNS server for use by Docker

Systems that run Ubuntu or an Ubuntu derivative on the desktop typically use
`127.0.0.1` as the default `nameserver` in the `/etc/resolv.conf` file.
NetworkManager also sets up `dnsmasq` to use the real DNS servers of the
connection and sets up `nameserver 127.0.0.1` in `/etc/resolv.conf`.

When starting containers on desktop machines with these configurations, Docker
users see this warning:

    WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers
    can't use it. Using default external servers : [8.8.8.8 8.8.4.4]

The warning occurs because Docker containers can't use the local DNS
nameserver. Instead, Docker defaults to using an external nameserver.

To avoid this warning, you can specify a DNS server for use by Docker
containers.
Or, you can disable `dnsmasq` in NetworkManager. Though, disabling
`dnsmasq` might make DNS resolution slower on some networks.

To specify a DNS server for use by Docker:

1. Log into Ubuntu as a user with `sudo` privileges.

2. Open the `/etc/default/docker` file for editing.

        $ sudo nano /etc/default/docker

3. Add a setting for Docker.

        DOCKER_OPTS="--dns 8.8.8.8"

    Replace `8.8.8.8` with a local DNS server such as `192.168.1.1`. You can
    also specify multiple DNS servers. Separate them with spaces, for example:

        --dns 8.8.8.8 --dns 192.168.1.1

    >**Warning**: If you're doing this on a laptop which connects to various
    >networks, make sure to choose a public DNS server.

4. Save and close the file.

5. Restart the Docker daemon.

        $ sudo restart docker

**Or, as an alternative to the previous procedure,** disable `dnsmasq` in
NetworkManager (this might slow your network).

1. Open the `/etc/NetworkManager/NetworkManager.conf` file for editing.

        $ sudo nano /etc/NetworkManager/NetworkManager.conf

2. Comment out the `dns=dnsmasq` line:

        # dns=dnsmasq

3. Save and close the file.

4. Restart both NetworkManager and Docker.

        $ sudo restart network-manager
        $ sudo restart docker

## Upgrade Docker

To install the latest version of Docker, run the install script again:

    $ wget -qO- https://get.docker.com/ | sh

diff --git a/installation/windows.md~ b/installation/windows.md~
deleted file mode 100644
index 4d425cc59d..0000000000
--- a/installation/windows.md~
+++ /dev/null
@@ -1,105 +0,0 @@
page_title: Installation on Windows
page_description: Docker installation on Microsoft Windows
page_keywords: Docker, Docker documentation, Windows, requirements, virtualbox, boot2docker

# Windows

> **Note:**
> Docker has been tested on Windows 7.1 and 8; it may also run on older versions.
> Your processor needs to support hardware virtualization.
The Docker Engine uses Linux-specific kernel features, so to run it on Windows
we need to use a lightweight virtual machine (VM). You use the Windows Docker
client to control the virtualized Docker Engine to build, run, and manage
Docker containers.

To make this process easier, we've designed a helper application called
[Boot2Docker](https://github.com/boot2docker/boot2docker) that installs the
virtual machine and runs the Docker daemon.

## Demonstration

## Installation

1. Download the latest release of the [Docker for Windows Installer](https://github.com/boot2docker/windows-installer/releases/latest)

2. Run the installer, which will install VirtualBox, MSYS-git, the boot2docker
   Linux ISO, and the Boot2Docker management tool.

    ![](/installation/images/windows-installer.png)

3. Run the `Boot2Docker Start` shell script from your Desktop or Program Files >
   Boot2Docker for Windows. The Start script will ask you to enter an SSH key
   passphrase - the simplest (but least secure) is to just hit [Enter].

    ![](/installation/images/windows-boot2docker-start.png)

    The `Boot2Docker Start` script will connect you to a shell session in the
    virtual machine. If needed, it will initialize a new VM and start it.

## Upgrading

1. Download the latest release of the [Docker for Windows Installer](https://github.com/boot2docker/windows-installer/releases/latest)

2. Run the installer, which will update the Boot2Docker management tool.

3. To upgrade your existing virtual machine, open a terminal and run:

        boot2docker stop
        boot2docker download
        boot2docker start

## Running Docker

> **Note:** If you are using a remote Docker daemon, such as Boot2Docker,
> then _do not_ type the `sudo` before the `docker` commands shown in the
> documentation's examples.

Boot2Docker will log you in automatically so you can start using Docker right
away.

Let's try the `hello-world` example image.
Run

    $ docker run hello-world

This should download the very small `hello-world` image and print a `Hello from Docker.` message.

## Log in with PuTTY instead of using CMD

Boot2Docker generates and uses the public/private key pair in your `%HOMEPATH%\.ssh`
directory, so to log in you need to use the private key from this same directory.

The private key needs to be converted into the format PuTTY uses.

You can do this with
[puttygen](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html):

- Open `puttygen.exe` and load ("File"->"Load" menu) the private key from
  `%HOMEPATH%\.ssh\id_boot2docker`.
- Then click "Save Private Key".
- Then use the saved file to log in with PuTTY using `docker@127.0.0.1:2022`.

# Further Details

The Boot2Docker management tool provides several commands:

    $ ./boot2docker
    Usage: ./boot2docker [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|delete|download|version} [<args>]

## Container port redirection

If you are curious, the username for the boot2docker default user is `docker` and the password is `tcuser`.

The latest version of `boot2docker` sets up a host-only network adapter which provides access to the container's ports.

If you run a container with an exposed port:

    docker run --rm -i -t -p 80:80 nginx

Then you should be able to access that nginx server using the IP address reported
to you using:

    boot2docker ip

Typically, it is 192.168.59.103, but it could get changed by VirtualBox's DHCP
implementation.
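Putting the two commands above together: once you have the VM's address, the published port is reachable at a plain HTTP URL. The sketch below only builds that URL; `url_for` is an illustrative helper, and the commented line shows how you might combine it with `boot2docker ip` and `curl` on a real host.

```shell
# Build the URL for a port published on the Boot2Docker VM.
# url_for is an illustrative helper, not part of boot2docker.
url_for() {
    printf 'http://%s:%s/' "$1" "$2"
}

# On a real host you might run:
#   curl "$(url_for "$(boot2docker ip 2>/dev/null)" 80)"
url_for 192.168.59.103 80
```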
For further information or to report issues, please see the [Boot2Docker site](http://boot2docker.io)

diff --git a/installation/windows.md~~ b/installation/windows.md~~
deleted file mode 100644
index 26b2a42a45..0000000000
--- a/installation/windows.md~~
+++ /dev/null
@@ -1,103 +0,0 @@
(Contents are identical to `installation/windows.md~` above, except that the
remote-daemon note under "Running Docker" is replaced by
`{{ include "no-remote-sudo.md" }}`.)

diff --git a/introduction/understanding-docker.md~ b/introduction/understanding-docker.md~
deleted file mode 100644
index 9c9995972f..0000000000
--- a/introduction/understanding-docker.md~
+++ /dev/null
@@ -1,286 +0,0 @@
page_title: Understanding Docker
page_description: Docker explained in depth
page_keywords: docker, introduction, documentation, about, technology, understanding

# Understanding Docker

**What is Docker?**

Docker is an open platform for developing, shipping, and running applications.
Docker is designed to deliver your applications faster. With Docker you can
separate your applications from your infrastructure and treat your
infrastructure like a managed application. Docker helps you ship code faster,
test faster, deploy faster, and shorten the cycle between writing code and
running code.

Docker does this by combining a lightweight container virtualization platform
with workflows and tooling that help you manage and deploy your applications.

At its core, Docker provides a way to run almost any application securely
isolated in a container. The isolation and security allow you to run many
containers simultaneously on your host. The lightweight nature of containers,
which run without the extra load of a hypervisor, means you can get more out of
your hardware.
- -Surrounding the container virtualization are tooling and a platform which can -help you in several ways: - -* getting your applications (and supporting components) into Docker containers -* distributing and shipping those containers to your teams for further development -and testing -* deploying those applications to your production environment, - whether it be in a local data center or the Cloud. - -## What can I use Docker for? - -*Faster delivery of your applications* - -Docker is perfect for helping you with the development lifecycle. Docker -allows your developers to develop on local containers that contain your -applications and services. It can then integrate into a continuous integration and -deployment workflow. - -For example, your developers write code locally and share their development stack via -Docker with their colleagues. When they are ready, they push their code and the -stack they are developing onto a test environment and execute any required -tests. From the testing environment, you can then push the Docker images into -production and deploy your code. - -*Deploying and scaling more easily* - -Docker's container-based platform allows for highly portable workloads. Docker -containers can run on a developer's local host, on physical or virtual machines -in a data center, or in the Cloud. - -Docker's portability and lightweight nature also make dynamically managing -workloads easy. You can use Docker to quickly scale up or tear down applications -and services. Docker's speed means that scaling can be near real time. - -*Achieving higher density and running more workloads* - -Docker is lightweight and fast. It provides a viable, cost-effective alternative -to hypervisor-based virtual machines. This is especially useful in high density -environments: for example, building your own Cloud or Platform-as-a-Service. But -it is also useful for small and medium deployments where you want to get more -out of the resources you have. 
## What are the major Docker components?

Docker has two major components:

* Docker: the open source container virtualization platform.
* [Docker Hub](https://hub.docker.com): our Software-as-a-Service
  platform for sharing and managing Docker containers.

> **Note:** Docker is licensed under the open source Apache 2.0 license.

## What is Docker's architecture?

Docker uses a client-server architecture. The Docker *client* talks to the
Docker *daemon*, which does the heavy lifting of building, running, and
distributing your Docker containers. Both the Docker client and the daemon *can*
run on the same system, or you can connect a Docker client to a remote Docker
daemon. The Docker client and daemon communicate via sockets or through a
RESTful API.

![Docker Architecture Diagram](/article-img/architecture.svg)

### The Docker daemon

As shown in the diagram above, the Docker daemon runs on a host machine. The
user does not directly interact with the daemon, but instead through the Docker
client.

### The Docker client

The Docker client, in the form of the `docker` binary, is the primary user
interface to Docker. It accepts commands from the user and communicates back and
forth with a Docker daemon.

### Inside Docker

To understand Docker's internals, you need to know about three components:

* Docker images.
* Docker registries.
* Docker containers.

#### Docker images

A Docker image is a read-only template. For example, an image could contain an
Ubuntu operating system with Apache and your web application installed. Images
are used to create Docker containers. Docker provides a simple way to build new
images or update existing images, or you can download Docker images that other
people have already created. Docker images are the **build** component of
Docker.

#### Docker registries

Docker registries hold images. These are public or private stores from which
you upload or download images. The public Docker registry is called
[Docker Hub](http://hub.docker.com). It provides a huge collection of existing
images for your use. These can be images you create yourself or you can use
images that others have previously created. Docker registries are the
**distribution** component of Docker.

#### Docker containers

Docker containers are similar to a directory. A Docker container holds
everything that is needed for an application to run. Each container is created
from a Docker image. Docker containers can be run, started, stopped, moved, and
deleted. Each container is an isolated and secure application platform. Docker
containers are the **run** component of Docker.

## So how does Docker work?

So far, we've learned that:

1. You can build Docker images that hold your applications.
2. You can create Docker containers from those Docker images to run your
   applications.
3. You can share those Docker images via
   [Docker Hub](https://hub.docker.com) or your own registry.

Let's look at how these elements combine together to make Docker work.

### How does a Docker image work?

We've already seen that Docker images are read-only templates from which Docker
containers are launched. Each image consists of a series of layers. Docker
makes use of [union file systems](http://en.wikipedia.org/wiki/UnionFS) to
combine these layers into a single image. Union file systems allow files and
directories of separate file systems, known as branches, to be transparently
overlaid, forming a single coherent file system.

One of the reasons Docker is so lightweight is because of these layers. When
you change a Docker image—for example, update an application to a new
version—a new layer gets built. Thus, rather than replacing the whole image or
entirely rebuilding, as you may do with a virtual machine, only that layer is
added or updated.
Now you don't need to distribute a whole new image, just the update, -making distributing Docker images faster and simpler. - -Every image starts from a base image, for example `ubuntu`, a base Ubuntu image, -or `fedora`, a base Fedora image. You can also use images of your own as the -basis for a new image, for example if you have a base Apache image you could use -this as the base of all your web application images. - -> **Note:** Docker usually gets these base images from -> [Docker Hub](https://hub.docker.com). - -Docker images are then built from these base images using a simple, descriptive -set of steps we call *instructions*. Each instruction creates a new layer in our -image. Instructions include actions like: - -* Run a command. -* Add a file or directory. -* Create an environment variable. -* What process to run when launching a container from this image. - -These instructions are stored in a file called a `Dockerfile`. Docker reads this -`Dockerfile` when you request a build of an image, executes the instructions, and -returns a final image. - -### How does a Docker registry work? -The Docker registry is the store for your Docker images. Once you build a Docker -image you can *push* it to a public registry [Docker Hub](https://hub.docker.com) or to -your own registry running behind your firewall. - -Using the Docker client, you can search for already published images and then -pull them down to your Docker host to build containers from them. - -[Docker Hub](https://hub.docker.com) provides both public and private storage -for images. Public storage is searchable and can be downloaded by anyone. -Private storage is excluded from search results and only you and your users can -pull images down and use them to build containers. You can [sign up for a storage plan -here](https://hub.docker.com/plans). - -### How does a container work? -A container consists of an operating system, user-added files, and meta-data. 
As -we've seen, each container is built from an image. That image tells Docker -what the container holds, what process to run when the container is launched, and -a variety of other configuration data. The Docker image is read-only. When -Docker runs a container from an image, it adds a read-write layer on top of the -image (using a union file system as we saw earlier) in which your application can -then run. - -### What happens when you run a container? -Either by using the `docker` binary or via the API, the Docker client tells the Docker -daemon to run a container. - - $ sudo docker run -i -t ubuntu /bin/bash - -Let's break down this command. The Docker client is launched using the `docker` -binary with the `run` option telling it to launch a new container. The bare -minimum the Docker client needs to tell the Docker daemon to run the container -is: - -* What Docker image to build the container from, here `ubuntu`, a base Ubuntu -image; -* The command you want to run inside the container when it is launched, -here `/bin/bash`, to start the Bash shell inside the new container. - -So what happens under the hood when we run this command? - -In order, Docker does the following: - -- **Pulls the `ubuntu` image:** Docker checks for the presence of the `ubuntu` -image and, if it doesn't exist locally on the host, then Docker downloads it from -[Docker Hub](https://hub.docker.com). If the image already exists, then Docker -uses it for the new container. -- **Creates a new container:** Once Docker has the image, it uses it to create a -container. -- **Allocates a filesystem and mounts a read-write _layer_:** The container is created in -the file system and a read-write layer is added to the image. -- **Allocates a network / bridge interface:** Creates a network interface that allows the -Docker container to talk to the local host. -- **Sets up an IP address:** Finds and attaches an available IP address from a pool. 
- **Executes a process that you specify:** Runs your application.
- **Captures and provides application output:** Connects and logs standard
  input, output, and errors for you to see how your application is running.

You now have a running container! From here you can manage your container,
interact with your application and then, when finished, stop and remove your
container.

## The underlying technology

Docker is written in Go and makes use of several Linux kernel features to
deliver the functionality we've seen.

### Namespaces

Docker takes advantage of a technology called `namespaces` to provide the
isolated workspace we call the *container*. When you run a container, Docker
creates a set of *namespaces* for that container.

This provides a layer of isolation: each aspect of a container runs in its own
namespace and does not have access outside it.

Some of the namespaces that Docker uses are:

- **The `pid` namespace:** Used for process isolation (PID: Process ID).
- **The `net` namespace:** Used for managing network interfaces (NET:
  Networking).
- **The `ipc` namespace:** Used for managing access to IPC resources (IPC:
  InterProcess Communication).
- **The `mnt` namespace:** Used for managing mount-points (MNT: Mount).
- **The `uts` namespace:** Used for isolating kernel and version identifiers
  (UTS: Unix Timesharing System).

### Control groups

Docker also makes use of another technology called `cgroups` or control groups.
A key to running applications in isolation is to have them only use the
resources you want. This ensures containers are good multi-tenant citizens on a
host. Control groups allow Docker to share available hardware resources among
containers and, if required, set up limits and constraints, for example,
limiting the memory available to a specific container.
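A memory limit like the one just mentioned is passed to `docker run` as a size string such as `128m`, which the kernel records in the cgroup as a byte count. The sketch below only illustrates that suffix arithmetic; `to_bytes` is a hypothetical helper, not part of Docker, and it handles only the lowercase `b`/`k`/`m`/`g` suffixes.

```shell
# Illustrative helper: convert a human-readable memory limit (as you might
# pass to `docker run -m 128m ...`) into the byte value a cgroup would record.
# to_bytes is hypothetical and only handles lowercase b/k/m/g suffixes.
to_bytes() {
    value=${1%[bkmg]}               # strip a trailing size suffix, if any
    case "$1" in
        *k) echo $((value * 1024)) ;;
        *m) echo $((value * 1024 * 1024)) ;;
        *g) echo $((value * 1024 * 1024 * 1024)) ;;
        *)  echo "$value" ;;        # no suffix: already in bytes
    esac
}

to_bytes 128m
```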
### Union file systems

Union file systems, or UnionFS, are file systems that operate by creating
layers, making them very lightweight and fast. Docker uses union file systems
to provide the building blocks for containers. Docker can make use of several
union file system variants including: AUFS, btrfs, vfs, and DeviceMapper.

### Container format

Docker combines these components into a wrapper we call a container format. The
default container format is called `libcontainer`. Docker also supports
traditional Linux containers using [LXC](https://linuxcontainers.org/). In the
future, Docker may support other container formats, for example, by integrating
with BSD Jails or Solaris Zones.

## Next steps

### Installing Docker

Visit the [installation section](/installation/#installation).

### The Docker User Guide

[Learn Docker in depth](/userguide/).

diff --git a/machine/index.md~ b/machine/index.md~
deleted file mode 100644
index 6a59e84361..0000000000
--- a/machine/index.md~
+++ /dev/null
@@ -1,863 +0,0 @@
no_version_dropdown: true
page_title: Docker Machine
page_description: Working with Docker Machine
page_keywords: docker, machine, virtualbox, digitalocean, amazonec2

# Docker Machine

> **Note**: Machine is currently in beta, so things are likely to change. We
> don't recommend you use it in production yet.

Machine makes it really easy to create Docker hosts on your computer, on cloud
providers, and inside your own data center. It creates servers, installs Docker
on them, then configures the Docker client to talk to them.

Once your Docker host has been created, Machine provides a number of commands
for managing it:

- Starting, stopping, restarting
- Upgrading Docker
- Configuring the Docker client to talk to your host

## Installation

Docker Machine is supported on Windows, OS X, and Linux.
To install Docker Machine, download the appropriate binary for your OS and
architecture to the correct place in your `PATH`:

- [Windows - x86_64](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_windows-amd64.exe)
- [OSX - x86_64](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_darwin-amd64)
- [Linux - x86_64](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_linux-amd64)
- [Windows - i386](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_windows-386.exe)
- [OSX - i386](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_darwin-386)
- [Linux - i386](https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_linux-386)

Now you should be able to check the version with `docker-machine -v`:

```
$ docker-machine -v
machine version 0.1.0
```

## Getting started with Docker Machine using a local VM

Let's take a look at using `docker-machine` to create, use, and manage a Docker
host inside of [VirtualBox](https://www.virtualbox.org/).

First, ensure that
[VirtualBox 4.3.20](https://www.virtualbox.org/wiki/Downloads) is correctly
installed on your system.

If you run the `docker-machine ls` command to show all available machines, you
will see that none have been created so far.

```
$ docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL
```

To create one, we run the `docker-machine create` command, passing the string
`virtualbox` to the `--driver` flag. The final argument we pass is the name of
the machine - in this case, we will name our machine "dev".

This will download a lightweight Linux distribution
([boot2docker](https://github.com/boot2docker/boot2docker)) with the Docker
daemon installed, and will create and start a VirtualBox VM with Docker
running.

```
$ docker-machine create --driver virtualbox dev
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
-INFO[0007] Starting VirtualBox VM... -INFO[0007] Waiting for VM to start... -INFO[0038] "dev" has been created and is now the active machine -INFO[0038] To connect: docker $(docker-machine config dev) ps -``` - -To use the Docker CLI, you can use the `env` command to list the commands -needed to connect to the instance. - -``` -$ docker-machine env dev -export DOCKER_TLS_VERIFY=yes -export DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client -export DOCKER_HOST=tcp://192.168.99.100:2376 - -``` - -You can see the machine you have created by running the `docker-machine ls` command -again: - -``` -$ docker-machine ls -NAME ACTIVE DRIVER STATE URL -dev * virtualbox Running tcp://192.168.99.100:2376 -``` - -The `*` next to `dev` indicates that it is the active host. - -Next, as noted in the output of the `docker-machine create` command, we have to tell -Docker to talk to that machine. You can do this with the `docker-machine config` -command. For example, - -``` -$ docker $(docker-machine config dev) ps -``` - -This will pass arguments to the Docker client that specify the TLS settings. -To see what will be passed, run `docker-machine config dev`. - -You can now run Docker commands on this host: - -``` -$ docker $(docker-machine config dev) run busybox echo hello world -Unable to find image 'busybox' locally -Pulling repository busybox -e72ac664f4f0: Download complete -511136ea3c5a: Download complete -df7546f9f060: Download complete -e433a6c5b276: Download complete -hello world -``` - -Any exposed ports are available on the Docker host’s IP address, which you can -get using the `docker-machine ip` command: - -``` -$ docker-machine ip -192.168.99.100 -``` - -Now you can manage as many local VMs running Docker as you please- just run -`docker-machine create` again. 
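Repeating `docker-machine create` for each VM is easy to script. The sketch below is a dry run under stated assumptions: `create_cmd` is a hypothetical helper that only assembles one invocation, the machine names are placeholders, and you would drop the `echo` to actually create the machines.

```shell
# Dry-run sketch: script the creation of several local VMs.
# create_cmd is a hypothetical helper; it only prints the command it would run.
create_cmd() {
    echo "docker-machine create --driver virtualbox $1"
}

for name in dev1 dev2 dev3; do   # machine names are placeholders
    create_cmd "$name"
done
```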
If you are finished using a host, you can stop it with `docker-machine stop`
and start it again with `docker-machine start`:

```
$ docker-machine stop
$ docker-machine start
```

If they aren't passed any arguments, commands such as `docker-machine stop`
will run against the active host (in this case, the VirtualBox VM). You can
also specify a host to run a command against as an argument. For instance, you
could also have written:

```
$ docker-machine stop dev
$ docker-machine start dev
```

## Using Docker Machine with a cloud provider

One of the nice things about `docker-machine` is that it provides several
"drivers" which let you use the same interface to create hosts on many
different cloud platforms. This is accomplished by using the
`docker-machine create` command with the `--driver` flag. Here we will be
demonstrating the [Digital Ocean](https://digitalocean.com) driver (called
`digitalocean`), but there are drivers included for several providers including
Amazon Web Services, Google Compute Engine, and Microsoft Azure.

Usually it is required that you pass account verification credentials for these
providers as flags to `docker-machine create`. These flags are unique for each
driver. For instance, to pass a Digital Ocean access token you use the
`--digitalocean-access-token` flag.

Let's take a look at how to do this.

To generate your access token:

1. Go to the Digital Ocean administrator panel and click on "Apps and API" in
   the side panel.
2. Click on "Generate New Token".
3. Give the token a clever name (e.g. "machine"), make sure the "Write"
   checkbox is checked, and click on "Generate Token".
4. Grab the big long hex string that is generated (this is your token) and
   store it somewhere safe.

Now, run `docker-machine create` with the `digitalocean` driver and pass your
key to the `--digitalocean-access-token` flag.
-
-Example:
-
-```
-$ docker-machine create \
-    --driver digitalocean \
-    --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 \
-    staging
-INFO[0000] Creating SSH key...
-INFO[0000] Creating Digital Ocean droplet...
-INFO[0002] Waiting for SSH...
-INFO[0085] "staging" has been created and is now the active machine
-INFO[0085] To connect: docker $(docker-machine config staging) ps
-```
-
-For convenience, `docker-machine` will use sensible defaults for choosing settings such
- as the image that the VPS is based on, but they can also be overridden using
-their respective flags (e.g. `--digitalocean-image`). This is useful if, for
-instance, you want to create a nice large instance with a lot of memory and CPUs
-(by default `docker-machine` creates a small VPS). For a full list of the
-flags/settings available and their defaults, see the output of
-`docker-machine create -h`.
-
-When the creation of a host is initiated, a unique SSH key for accessing the
-host (initially for provisioning, then directly later if the user runs the
-`docker-machine ssh` command) will be created automatically and stored in the client's
-directory in `~/.docker/machines`. After the creation of the SSH key, Docker
-will be installed on the remote machine and the daemon will be configured to
-accept remote connections over TCP using TLS for authentication. Once this
-is finished, the host is ready for connection.
-
-From this point, the remote host behaves much like the local host we
-created in the last section. If we run `docker-machine ls`, we'll see it is now the
-active host:
-
-```
-$ docker-machine ls
-NAME      ACTIVE   DRIVER         STATE     URL
-dev                virtualbox     Running   tcp://192.168.99.103:2376
-staging   *        digitalocean   Running   tcp://104.236.50.118:2376
-```
-
-To select an active host, you can use the `docker-machine active` command. 
-
-```
-$ docker-machine active dev
-$ docker-machine ls
-NAME      ACTIVE   DRIVER         STATE     URL
-dev       *        virtualbox     Running   tcp://192.168.99.103:2376
-staging            digitalocean   Running   tcp://104.236.50.118:2376
-```
-
-To remove a host and all of its containers and images, use `docker-machine rm`:
-
-```
-$ docker-machine rm dev staging
-$ docker-machine ls
-NAME   ACTIVE   DRIVER   STATE   URL
-```
-
-## Adding a host without a driver
-
-You can add a host to Docker Machine using only a URL and no driver. It can then
-be used as an alias for an existing host so you don’t have to type out the URL
-every time you run a Docker command.
-
-```
-$ docker-machine create --url=tcp://50.134.234.20:2376 custombox
-$ docker-machine ls
-NAME        ACTIVE   DRIVER   STATE     URL
-custombox   *        none     Running   tcp://50.134.234.20:2376
-```
-
-## Using Docker Machine with Docker Swarm
-Docker Machine can also provision [Swarm](https://github.com/docker/swarm)
-clusters. This can be used with any driver and will be secured with TLS.
-
-> **Note**: This is an experimental feature so the subcommands and
-> options are likely to change in future versions.
-
-First, create a Swarm token. Optionally, you can use another discovery service.
-See the Swarm docs for details.
-
-To create the token, first create a Machine. This example will use VirtualBox.
-
-```
-$ docker-machine create -d virtualbox local
-```
-
-Load the Machine configuration into your shell:
-
-```
-$ $(docker-machine env local)
-```
-Then generate the token using the Swarm Docker image:
-
-```
-$ docker run swarm create
-1257e0f0bbb499b5cd04b4c9bdb2dab3
-```
-Once you have the token, you can create the cluster.
-
-### Swarm Master
-
-Create the Swarm master:
-
-```
-docker-machine create \
-    -d virtualbox \
-    --swarm \
-    --swarm-master \
-    --swarm-discovery token:// \
-    swarm-master
-```
-
-Append your random token to the `token://` part of the `--swarm-discovery` URL
-(for example, `token://1257e0f0bbb499b5cd04b4c9bdb2dab3`).
-This will create the Swarm master and also register it as a Swarm node. 
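Because the same discovery token is passed to the master and to every node, it can help to capture it once in a shell variable. A minimal sketch of that idea follows; the token value is the example one shown above, and in a real session you would capture it with `TOKEN=$(docker run swarm create)` instead (the `echo` lines only print the commands you would then run):

```shell
# Capture the Swarm token once and build the discovery URL from it.
# Example token from the output above; normally: TOKEN=$(docker run swarm create)
TOKEN=1257e0f0bbb499b5cd04b4c9bdb2dab3
DISCOVERY="token://$TOKEN"

# Print the create commands that would use this discovery URL.
echo "docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery $DISCOVERY swarm-master"
echo "docker-machine create -d virtualbox --swarm --swarm-discovery $DISCOVERY swarm-node-00"
```

This avoids retyping (and mistyping) the token for each node you add.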
- -### Swarm Nodes - -Now, create more Swarm nodes: - -``` -docker-machine create \ - -d virtualbox \ - --swarm \ - --swarm-discovery token:// \ - swarm-node-00 -``` - -You now have a Swarm cluster across two nodes. -To connect to the Swarm master, use `docker-machine env --swarm swarm-master` - -For example: - -``` -$ docker-machine env --swarm swarm-master -export DOCKER_TLS_VERIFY=yes -export DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client -export DOCKER_HOST=tcp://192.168.99.100:3376 -``` - -You can load this into your environment using -`$(docker-machine env --swarm swarm-master)`. - -Now you can use the Docker CLI to query: - -``` -$ docker info -Containers: 1 -Nodes: 1 - swarm-master: 192.168.99.100:2376 - └ Containers: 2 - └ Reserved CPUs: 0 / 4 - └ Reserved Memory: 0 B / 999.9 MiB -``` - -## Subcommands - -#### active - -Get or set the active machine. - -``` -$ docker-machine ls -NAME ACTIVE DRIVER STATE URL -dev virtualbox Running tcp://192.168.99.103:2376 -staging * digitalocean Running tcp://104.236.50.118:2376 -$ docker-machine active dev -$ docker-machine ls -NAME ACTIVE DRIVER STATE URL -dev * virtualbox Running tcp://192.168.99.103:2376 -staging digitalocean Running tcp://104.236.50.118:2376 -``` - -#### create - -Create a machine. - -``` -$ docker-machine create --driver virtualbox dev -INFO[0000] Creating SSH key... -INFO[0000] Creating VirtualBox VM... -INFO[0007] Starting VirtualBox VM... -INFO[0007] Waiting for VM to start... -INFO[0038] "dev" has been created and is now the active machine. To point Docker at this machine, run: export DOCKER_HOST=$(docker-machine url) DOCKER_AUTH=identity -``` - -#### config - -Show the Docker client configuration for a machine. 
- -``` -$ docker-machine config dev ---tls --tlscacert=/Users/ehazlett/.docker/machines/dev/ca.pem --tlscert=/Users/ehazlett/.docker/machines/dev/cert.pem --tlskey=/Users/ehazlett/.docker/machines/dev/key.pem -H tcp://192.168.99.103:2376 -``` - -#### env - -Set environment variables to dictate that `docker` should run a command against -a particular machine. - -`docker-machine env machinename` will print out `export` commands which can be -run in a subshell. Running `docker-machine env -u` will print -`unset` commands which reverse this effect. - -``` -$ env | grep DOCKER -$ $(docker-machine env dev) -$ env | grep DOCKER -DOCKER_HOST=tcp://192.168.99.101:2376 -DOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client -DOCKER_TLS_VERIFY=yes -$ # If you run a docker command, now it will run against that host. -$ $(docker-machine env -u) -$ env | grep DOCKER -$ # The environment variables have been unset. -``` - -#### inspect - -Inspect information about a machine. - -``` -$ docker-machine inspect dev -{ - "DriverName": "virtualbox", - "Driver": { - "MachineName": "docker-host-128be8d287b2028316c0ad5714b90bcfc11f998056f2f790f7c1f43f3d1e6eda", - "SSHPort": 55834, - "Memory": 1024, - "DiskSize": 20000, - "Boot2DockerURL": "" - } -} -``` - -#### help - -Show help text. - -#### ip - -Get the IP address of a machine. - -``` -$ docker-machine ip -192.168.99.104 -``` - -#### kill - -Kill (abruptly force stop) a machine. - -``` -$ docker-machine ls -NAME ACTIVE DRIVER STATE URL -dev * virtualbox Running tcp://192.168.99.104:2376 -$ docker-machine kill dev -$ docker-machine ls -NAME ACTIVE DRIVER STATE URL -dev * virtualbox Stopped -``` - -#### ls - -List machines. 
-
-```
-$ docker-machine ls
-NAME   ACTIVE   DRIVER       STATE     URL
-dev             virtualbox   Stopped
-foo0            virtualbox   Running   tcp://192.168.99.105:2376
-foo1            virtualbox   Running   tcp://192.168.99.106:2376
-foo2            virtualbox   Running   tcp://192.168.99.107:2376
-foo3            virtualbox   Running   tcp://192.168.99.108:2376
-foo4   *        virtualbox   Running   tcp://192.168.99.109:2376
-```
-
-#### restart
-
-Restart a machine. Often this is equivalent to
-`docker-machine stop; docker-machine start`.
-
-```
-$ docker-machine restart
-INFO[0005] Waiting for VM to start...
-```
-
-#### rm
-
-Remove a machine. This will remove the local reference as well as delete it
-on the cloud provider or virtualization management platform.
-
-```
-$ docker-machine ls
-NAME   ACTIVE   DRIVER       STATE     URL
-foo0            virtualbox   Running   tcp://192.168.99.105:2376
-foo1            virtualbox   Running   tcp://192.168.99.106:2376
-$ docker-machine rm foo1
-$ docker-machine ls
-NAME   ACTIVE   DRIVER       STATE     URL
-foo0            virtualbox   Running   tcp://192.168.99.105:2376
-```
-
-#### ssh
-
-Log into or run a command on a machine using SSH.
-
-To log in, just run `docker-machine ssh machinename`:
-
-```
-$ docker-machine ssh dev
-                        ##         .
-                  ## ## ##        ==
-               ## ## ## ##      ===
-           /""""""""""""""""\___/ ===
-      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
-           \______ o          __/
-             \    \        __/
-              \____\______/
- _                 _   ____     _            _
-| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
-| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
-| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
-|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
-Boot2Docker version 1.4.0, build master : 69cf398 - Fri Dec 12 01:39:42 UTC 2014
-docker@boot2docker:~$ ls /
-Users/   dev/     home/    lib/     mnt/     proc/    run/     sys/     usr/
-bin/     etc/     init     linuxrc  opt/     root/    sbin/    tmp      var/
-```
-
-You can also specify commands to run remotely by appending them directly to the
-`docker-machine ssh` command, much as you would with the regular `ssh` program:
-
-```
-$ docker-machine ssh dev free
-             total       used       free     shared    buffers
-Mem:       1023556     183136     840420          0      30920
--/+ buffers:            152216     871340
-Swap:      1212036          0    1212036
-```
-
-If the command you are appending has flags, e.g. `df -h`, you can use the flag
-parsing terminator (`--`) to avoid confusing the `docker-machine` client, which
-will otherwise interpret them as flags you intended to pass to it:
-
-```
-$ docker-machine ssh dev -- df -h
-Filesystem                Size      Used Available Use% Mounted on
-rootfs                  899.6M     85.9M    813.7M  10% /
-tmpfs                   899.6M     85.9M    813.7M  10% /
-tmpfs                   499.8M         0    499.8M   0% /dev/shm
-/dev/sda1                18.2G     58.2M     17.2G   0% /mnt/sda1
-cgroup                  499.8M         0    499.8M   0% /sys/fs/cgroup
-/dev/sda1                18.2G     58.2M     17.2G   0%
-/mnt/sda1/var/lib/docker/aufs
-```
-
-#### start
-
-Gracefully start a machine.
-
-```
-$ docker-machine start
-INFO[0005] Waiting for VM to start...
-```
-
-#### stop
-
-Gracefully stop a machine.
-
-```
-$ docker-machine ls
-NAME   ACTIVE   DRIVER       STATE     URL
-dev    *        virtualbox   Running   tcp://192.168.99.104:2376
-$ docker-machine stop dev
-$ docker-machine ls
-NAME   ACTIVE   DRIVER       STATE     URL
-dev    *        virtualbox   Stopped
-```
-
-#### upgrade
-
-Upgrade a machine to the latest version of Docker. 
-
-```
-$ docker-machine upgrade dev
-```
-
-#### url
-
-Get the URL of a host.
-
-```
-$ docker-machine url
-tcp://192.168.99.109:2376
-```
-
-## Drivers
-
-TODO: List all possible values (where applicable) for all flags for every
-driver.
-
-#### Amazon Web Services
-Create machines on [Amazon Web Services](http://aws.amazon.com). You will need an Access Key ID, Secret Access Key and a VPC ID. To find the VPC ID, log in to the AWS console and go to Services -> VPC -> Your VPCs. Select the one where you would like to launch the instance.
-
-Options:
-
- - `--amazonec2-access-key`: **required** Your access key ID for the Amazon Web Services API.
- - `--amazonec2-ami`: The AMI ID of the instance to use. Default: `ami-4ae27e22`
- - `--amazonec2-instance-type`: The instance type to run. Default: `t2.micro`
- - `--amazonec2-region`: The region to use when launching the instance. Default: `us-east-1`
- - `--amazonec2-root-size`: The root disk size of the instance (in GB). Default: `16`
- - `--amazonec2-secret-key`: **required** Your secret access key for the Amazon Web Services API.
- - `--amazonec2-security-group`: AWS VPC security group name. Default: `docker-machine`
- - `--amazonec2-session-token`: Your session token for the Amazon Web Services API.
- - `--amazonec2-subnet-id`: AWS VPC subnet ID.
- - `--amazonec2-vpc-id`: **required** Your VPC ID to launch the instance in.
- - `--amazonec2-zone`: The AWS zone to launch the instance in (i.e. one of a, b, c, d, e). Default: `a`
-
-By default, the Amazon EC2 driver will use a daily image of Ubuntu 14.04 LTS. 
-
- | Region        | AMI ID     |
- |:--------------|:-----------|
- |ap-northeast-1 |ami-44f1e245|
- |ap-southeast-1 |ami-f95875ab|
- |ap-southeast-2 |ami-890b62b3|
- |cn-north-1     |ami-fe7ae8c7|
- |eu-west-1      |ami-823686f5|
- |eu-central-1   |ami-ac1524b1|
- |sa-east-1      |ami-c770c1da|
- |us-east-1      |ami-4ae27e22|
- |us-west-1      |ami-d1180894|
- |us-west-2      |ami-898dd9b9|
- |us-gov-west-1  |ami-cf5630ec|
-
-#### Digital Ocean
-
-Create Docker machines on [Digital Ocean](https://www.digitalocean.com/).
-
-You need to create a personal access token under "Apps & API" in the Digital Ocean
-Control Panel and pass that to `docker-machine create` with the `--digitalocean-access-token` option.
-
-    $ docker-machine create --driver digitalocean --digitalocean-access-token=aa9399a2175a93b17b1c86c807e08d3fc4b79876545432a629602f61cf6ccd6b test-this
-
-Options:
-
- - `--digitalocean-access-token`: Your personal access token for the Digital Ocean API.
- - `--digitalocean-image`: The name of the Digital Ocean image to use. Default: `docker`
- - `--digitalocean-region`: The region to create the droplet in, see [Regions API](https://developers.digitalocean.com/documentation/v2/#regions) for how to get a list. Default: `nyc3`
- - `--digitalocean-size`: The size of the Digital Ocean droplet (sizes larger than the default take the form `2gb`). Default: `512mb`
-
-The DigitalOcean driver will use `ubuntu-14-04-x64` as the default image.
-
-#### Google Compute Engine
-Create machines on [Google Compute Engine](https://cloud.google.com/compute/). You will need a Google account and project name. See https://cloud.google.com/compute/docs/projects for details on projects.
-
-The Google driver uses OAuth. When creating the machine, your browser will open so you can authorize access. Once authorized, paste the code given in the prompt to launch the instance.
-
-Options:
-
- - `--google-zone`: The zone to launch the instance in. Default: `us-central1-a`
- - `--google-machine-type`: The type of instance. 
Default: `f1-micro`
- - `--google-username`: The username to use for the instance. Default: `docker-user`
- - `--google-instance-name`: The name of the instance. Default: `docker-machine`
- - `--google-project`: The name of the project to use when launching the instance.
-
-The GCE driver will use the `ubuntu-1404-trusty-v20141212` image unless otherwise specified.
-
-#### IBM SoftLayer
-
-Create machines on [SoftLayer](http://softlayer.com).
-
-You need to generate an API key in the SoftLayer control panel.
-[Retrieve your API key](http://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key)
-
-Options:
-
- - `--softlayer-api-endpoint=`: Change the SoftLayer API endpoint.
- - `--softlayer-user`: **required** Username for your SoftLayer account; the API key needs to match this user.
- - `--softlayer-api-key`: **required** API key for your user account.
- - `--softlayer-cpu`: Number of CPUs for the machine.
- - `--softlayer-disk-size`: Size of the disk in MB. `0` sets the SoftLayer default.
- - `--softlayer-domain`: **required** Domain name for the machine.
- - `--softlayer-hostname`: Hostname for the machine.
- - `--softlayer-hourly-billing`: Sets the hourly billing flag (default); otherwise uses monthly billing.
- - `--softlayer-image`: OS image to use.
- - `--softlayer-local-disk`: Use local machine disk instead of SoftLayer SAN.
- - `--softlayer-memory`: Memory for the host in MB.
- - `--softlayer-private-net-only`: Disable public networking.
- - `--softlayer-region`: SoftLayer region.
-
-The SoftLayer driver will use `UBUNTU_LATEST` as the image type by default.
-
-
-#### Microsoft Azure
-
-Create machines on [Microsoft Azure](http://azure.microsoft.com/).
-
-You need to create a subscription with a cert. 
Run these commands and answer the questions:
-
-    $ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
-    $ openssl pkcs12 -export -out mycert.pfx -in mycert.pem -name "My Certificate"
-    $ openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
-
-Go to the Azure portal, open the "Settings" page (you can find the link at the bottom of the
-left sidebar - you need to scroll), then "Management Certificates", and upload `mycert.cer`.
-
-Grab your subscription ID from the portal, then run `docker-machine create` with these details:
-
-    $ docker-machine create -d azure --azure-subscription-id="SUB_ID" --azure-subscription-cert="mycert.pem" A-VERY-UNIQUE-NAME
-
-Options:
-
- - `--azure-subscription-id`: Your Azure subscription ID (a GUID like `d255d8d7-5af0-4f5c-8a3e-1545044b861e`).
- - `--azure-subscription-cert`: Your Azure subscription cert.
-
-The Azure driver uses the `b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB`
-image by default. Note that this image is not available in the Chinese regions. In China you should
- specify `b549f4301d0b4295b8e76ceb65df47d4__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB`.
-
-You may need to `docker-machine ssh` into the virtual machine and reboot to ensure that the OS is updated.
-
-#### Microsoft Hyper-V
-
-Creates a Boot2Docker virtual machine locally on your Windows machine
-using Hyper-V. [See here](http://windows.microsoft.com/en-us/windows-8/hyper-v-run-virtual-machines)
-for instructions to enable Hyper-V. You will need to use an
-Administrator-level account to create and manage Hyper-V machines.
-
-> **Note**: You will need an existing virtual switch to use the
-> driver. Hyper-V can share an external network interface (aka
-> bridging); see [this blog](http://blogs.technet.com/b/canitpro/archive/2014/03/11/step-by-step-enabling-hyper-v-for-use-on-windows-8-1.aspx). 
-> If you would like to use NAT, create an internal network, and use
-> [Internet Connection
-> Sharing](http://www.packet6.com/allowing-windows-8-1-hyper-v-vm-to-work-with-wifi/).
-
-Options:
-
- - `--hyper-v-boot2docker-location`: Location of a local boot2docker ISO to use. Overrides the URL option below.
- - `--hyper-v-boot2docker-url`: The URL of the boot2docker ISO. Defaults to the latest available version.
- - `--hyper-v-disk-size`: Size of disk for the host in MB. Defaults to `20000`.
- - `--hyper-v-memory`: Size of memory for the host in MB. Defaults to `1024`. The machine is set up to use dynamic memory.
- - `--hyper-v-virtual-switch`: Name of the virtual switch to use. Defaults to the first one found.
-
-#### OpenStack
-Create machines on [OpenStack](http://www.openstack.org/software/).
-
-Mandatory:
-
- - `--openstack-flavor-id`: The flavor ID to use when creating the machine.
- - `--openstack-image-id`: The image ID to use when creating the machine.
-
-Options:
-
- - `--openstack-auth-url`: Keystone service base URL.
- - `--openstack-username`: User identifier to authenticate with.
- - `--openstack-password`: User password. It can be omitted if the standard environment variable `OS_PASSWORD` is set.
- - `--openstack-tenant-name` or `--openstack-tenant-id`: Identify the tenant in which the machine will be created.
- - `--openstack-region`: The region to work in. Can be omitted if there is only one region in the OpenStack deployment.
- - `--openstack-endpoint-type`: Endpoint type; can be `internalURL`, `adminURL`, or `publicURL`. This is a hint that helps the driver
-   choose the right URL in the OpenStack service catalog. If not provided, the default is `publicURL`.
- - `--openstack-net-id`: The ID of the private network the machine will be connected to. If your OpenStack project
-   contains only one private network, it will be used automatically. 
- - `--openstack-sec-groups`: If security groups are available on your OpenStack deployment, you can specify a comma-separated list
-   to use for the machine (e.g. `secgrp001,secgrp002`).
- - `--openstack-floatingip-pool`: The IP pool that will be used to get a public IP and assign it to the machine. If there is an
-   IP address already allocated but not assigned to any machine, this IP will be chosen and assigned to the machine. If
-   there is no IP address already allocated, a new IP will be allocated and assigned to the machine.
- - `--openstack-ssh-user`: The username to use for SSH into the machine. If not provided, `root` will be used.
- - `--openstack-ssh-port`: Customize the SSH port if the SSH server on the machine does not listen on the default port.
-
-Environment variables:
-
-The following environment variables map to the corresponding options. If both an environment variable
-and a CLI option are provided, the CLI option takes precedence.
-
-| Environment variable | CLI option                  |
-|----------------------|-----------------------------|
-| `OS_AUTH_URL`        | `--openstack-auth-url`      |
-| `OS_USERNAME`        | `--openstack-username`      |
-| `OS_PASSWORD`        | `--openstack-password`      |
-| `OS_TENANT_NAME`     | `--openstack-tenant-name`   |
-| `OS_TENANT_ID`       | `--openstack-tenant-id`     |
-| `OS_REGION_NAME`     | `--openstack-region`        |
-| `OS_ENDPOINT_TYPE`   | `--openstack-endpoint-type` |
-
-#### Rackspace
-Create machines on [Rackspace cloud](http://www.rackspace.com/cloud).
-
-Options:
-
- - `--rackspace-username`: Rackspace account username.
- - `--rackspace-api-key`: Rackspace API key.
- - `--rackspace-region`: Rackspace region name.
- - `--rackspace-endpoint-type`: Rackspace endpoint type (`adminURL`, `internalURL`, or the default `publicURL`).
- - `--rackspace-image-id`: Rackspace image ID. Default: Ubuntu 14.10 (Utopic Unicorn) (PVHVM)
- - `--rackspace-flavor-id`: Rackspace flavor ID. Default: General Purpose 1GB
- - `--rackspace-ssh-user`: SSH user for the newly booted machine. 
Set to `root` by default.
- - `--rackspace-ssh-port`: SSH port for the newly booted machine. Set to `22` by default.
-
-Environment variables:
-
-The following environment variables map to the corresponding options. If both an environment
-variable and a CLI option are provided, the CLI option takes precedence.
-
-| Environment variable | CLI option                  |
-|----------------------|-----------------------------|
-| `OS_USERNAME`        | `--rackspace-username`      |
-| `OS_API_KEY`         | `--rackspace-api-key`       |
-| `OS_REGION_NAME`     | `--rackspace-region`        |
-| `OS_ENDPOINT_TYPE`   | `--rackspace-endpoint-type` |
-
-The Rackspace driver will use `598a4282-f14b-4e50-af4c-b3e52749d9f9` (Ubuntu 14.04 LTS) by default.
-
-#### Oracle VirtualBox
-
-Create machines locally using [VirtualBox](https://www.virtualbox.org/).
-This driver requires VirtualBox to be installed on your host.
-
-    $ docker-machine create --driver=virtualbox vbox-test
-
-Options:
-
- - `--virtualbox-boot2docker-url`: The URL of the boot2docker image. Defaults to the latest available version.
- - `--virtualbox-disk-size`: Size of disk for the host in MB. Default: `20000`
- - `--virtualbox-memory`: Size of memory for the host in MB. Default: `1024`
-
-The VirtualBox driver uses the latest boot2docker image.
-
-#### VMware Fusion
-Creates machines locally on [VMware Fusion](http://www.vmware.com/products/fusion). Requires VMware Fusion to be installed.
-
-Options:
-
- - `--vmwarefusion-boot2docker-url`: URL for the boot2docker image.
- - `--vmwarefusion-disk-size`: Size of disk for the host VM (in MB). Default: `20000`
- - `--vmwarefusion-memory-size`: Size of memory for the host VM (in MB). Default: `1024`
-
-The VMware Fusion driver uses the latest boot2docker image.
-
-#### VMware vCloud Air
-Creates machines on the [vCloud Air](http://vcloud.vmware.com) subscription service. You need an account within an existing subscription of vCloud Air VPC or Dedicated Cloud.
-
-Options:
-
- - `--vmwarevcloudair-username`: vCloud Air Username. 
- - `--vmwarevcloudair-password`: vCloud Air Password.
- - `--vmwarevcloudair-catalog`: Catalog. Default: `Public Catalog`
- - `--vmwarevcloudair-catalogitem`: Catalog Item. Default: `Ubuntu Server 12.04 LTS (amd64 20140927)`
- - `--vmwarevcloudair-computeid`: Compute ID (if using Dedicated Cloud).
- - `--vmwarevcloudair-cpu-count`: VM CPU Count. Default: `1`
- - `--vmwarevcloudair-docker-port`: Docker port. Default: `2376`
- - `--vmwarevcloudair-edgegateway`: Organization Edge Gateway. Default: ``
- - `--vmwarevcloudair-memory-size`: VM Memory Size in MB. Default: `2048`
- - `--vmwarevcloudair-name`: vApp Name. Default: ``
- - `--vmwarevcloudair-orgvdcnetwork`: Organization VDC Network to attach. Default: `-default-routed`
- - `--vmwarevcloudair-provision`: Install Docker binaries. Default: `true`
- - `--vmwarevcloudair-publicip`: Org Public IP to use.
- - `--vmwarevcloudair-ssh-port`: SSH port. Default: `22`
- - `--vmwarevcloudair-vdcid`: Virtual Data Center ID.
-
-The VMware vCloud Air driver will use the `Ubuntu Server 12.04 LTS (amd64 20140927)` image by default.
-
-#### VMware vSphere
-Creates machines on a [VMware vSphere](http://www.vmware.com/products/vsphere) Virtual Infrastructure. Requires a working vSphere (ESXi and optionally vCenter) installation. The vSphere driver depends on [`govc`](https://github.com/vmware/govmomi/tree/master/govc) (which must be on the `PATH`) and has been tested with [vmware/govmomi@`c848630`](https://github.com/vmware/govmomi/commit/c8486300bfe19427e4f3226e3b3eac067717ef17).
-
-Options:
-
- - `--vmwarevsphere-username`: vSphere Username.
- - `--vmwarevsphere-password`: vSphere Password.
- - `--vmwarevsphere-boot2docker-url`: URL for the boot2docker image.
- - `--vmwarevsphere-compute-ip`: Compute host IP where the Docker VM will be instantiated.
- - `--vmwarevsphere-cpu-count`: CPU count for the Docker VM. Default: `2`
- - `--vmwarevsphere-datacenter`: Datacenter for the Docker VM (must be set to `ha-datacenter` when connecting to a single host). 
- - `--vmwarevsphere-datastore`: Datastore for the Docker VM.
- - `--vmwarevsphere-disk-size`: Size of disk for the Docker VM (in MB). Default: `20000`
- - `--vmwarevsphere-memory-size`: Size of memory for the Docker VM (in MB). Default: `2048`
- - `--vmwarevsphere-network`: Network where the Docker VM will be attached.
- - `--vmwarevsphere-pool`: Resource pool for the Docker VM.
- - `--vmwarevsphere-vcenter`: IP/hostname for vCenter (or ESXi if connecting directly to a single host).
-
-The VMware vSphere driver uses the latest boot2docker image.
diff --git a/project/advanced-contributing.md~ b/project/advanced-contributing.md~
deleted file mode 100644
index df5756d9d7..0000000000
--- a/project/advanced-contributing.md~
+++ /dev/null
@@ -1,139 +0,0 @@
-page_title: Advanced contributing
-page_description: Explains workflows for refactor and design proposals
-page_keywords: contribute, project, design, refactor, proposal
-
-# Advanced contributing
-
-In this section, you learn about the more advanced contributions you can make.
-They are advanced because they have a more involved workflow or require greater
-programming experience. Don't be scared off, though: if you like to stretch and
-challenge yourself, this is the place for you.
-
-This section gives generalized instructions for advanced contributions. You'll
-read about the workflow, but there are no specific descriptions of commands.
-Your goal should be to understand the processes described.
-
-At this point, you should have read and worked through the earlier parts of
-the project contributor guide. You should also have
-made at least one project contribution.
-
-## Refactor or cleanup proposal
-
-A refactor or cleanup proposal changes Docker's internal structure without
-altering the external behavior. To make this type of proposal:
-
-1. Fork `docker/docker`.
-
-2. Make your changes in a feature branch.
-
-3. Sync and rebase with `master` as you work.
-
-4. Run the full test suite.
-
-5. 
Submit your code through a pull request (PR).
-
-   The PR's title should have the format:
-
-   **Cleanup:** _short title_
-
-   If your change requires logic changes, note that in your request.
-
-6. Work through Docker's review process until merge.
-
-
-## Design proposal
-
-A design proposal solves a problem or adds a feature to the Docker software.
-The process for submitting design proposals requires two pull requests, one
-for the design and one for the implementation.
-
-![Simple process](/project/images/proposal.png)
-
-The important thing to notice is that both the design pull request and the
-implementation pull request go through a review. In other words, there is a
-considerable time commitment in a design proposal; so, you might want to pair
-with someone on design work.
-
-The following provides greater detail on the process:
-
-1. Come up with an idea.
-
-   Ideas usually come from limitations users feel working with a product. So,
-   take some time to really use Docker. Try it on different platforms; explore
-   how it works with different web applications. Go to some community events
-   and find out what other users want.
-
-2. Review existing issues and proposals to make sure no other user is proposing a similar idea.
-
-   The design proposals are all online in our GitHub pull requests.
-
-3. Talk to the community about your idea.
-
-   We have lots of community forums
-   where you can get feedback on your idea. Float your idea in a forum or two
-   to get some commentary going on it.
-
-4. Fork `docker/docker` and clone the repo to your local host.
-
-5. Create a new Markdown file in the area you wish to change.
-
-   For example, if you want to redesign our daemon, create a new file under the
-   `daemon/` folder.
-
-6. Name the file descriptively, for example `redesign-daemon-proposal.md`.
-
-7. Write a proposal for your change into the file.
-
-   This is a Markdown file that describes your idea. 
Your proposal
-   should include information like:
-
-   * Why is this change needed or what are the use cases?
-   * What are the requirements this change should meet?
-   * What are some ways to design/implement this feature?
-   * Which design/implementation do you think is best and why?
-   * What are the risks or limitations of your proposal?
-
-   This is your chance to convince people your idea is sound.
-
-8. Submit your proposal in a pull request to `docker/docker`.
-
-   The title should have the format:
-
-   **Proposal:** _short title_
-
-   The body of the pull request should include a brief summary of your change
-   and then say something like "_See the file for a complete description_".
-
-9. Refine your proposal through review.
-
-   The maintainers and the community review your proposal. You'll need to
-   answer questions and sometimes explain or defend your approach. This is a
-   chance for everyone to both teach and learn.
-
-10. Pull request accepted.
-
-    Your request may also be rejected. Not every idea is a good fit for Docker.
-    Let's assume, though, that your proposal succeeded.
-
-11. Implement your idea.
-
-    Implementation uses all the standard practices of any contribution.
-
-    * fork `docker/docker`
-    * create a feature branch
-    * sync frequently back to master
-    * test as you go and run the full test suite before submitting a PR
-
-    If you run into issues, the community is there to help.
-
-12. When you have a complete implementation, submit a pull request back to `docker/docker`.
-
-13. Review and iterate on your code.
-
-    If you are making a large code change, you can expect greater scrutiny
-    during this phase.
-
-14. Acceptance and merge! 
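The fork-and-branch loop described in the implementation steps can be sketched against a throwaway local repository. This is a minimal, illustrative sketch only; the issue number, file name, and identity below are made up, and a real contribution would of course target your fork of `docker/docker`:

```shell
# A throwaway local repository standing in for your docker/docker fork.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git -c user.name=you -c user.email=you@example.com \
    commit -q --allow-empty -m "initial commit"

# Work happens on a feature branch, e.g. one named after an issue.
git checkout -q -b 1234-redesign-daemon-proposal
echo "proposal text" > redesign-daemon-proposal.md   # illustrative proposal file
git add redesign-daemon-proposal.md
git -c user.name=you -c user.email=you@example.com \
    commit -q -m "Proposal: redesign daemon"

git rev-list --count HEAD   # the branch now carries both commits
```

In the real workflow you would also add `docker/docker` as the `upstream` remote and rebase the feature branch on `upstream/master` as you go.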
-
diff --git a/project/coding-style.md~ b/project/coding-style.md~
deleted file mode 100644
index e5b6f5fe9c..0000000000
--- a/project/coding-style.md~
+++ /dev/null
@@ -1,93 +0,0 @@
-page_title: Coding Style Checklist
-page_description: List of guidelines for coding Docker contributions
-page_keywords: change, commit, squash, request, pull request, test, unit test, integration tests, Go, gofmt, LGTM
-
-# Coding Style Checklist
-
-This checklist summarizes the material you worked through in [make a
-code contribution](/project/make-a-contribution) and [advanced
-contributing](/project/advanced-contributing). The checklist applies both to
-program code and to documentation code.
-
-## Change and commit code
-
-* Fork the `docker/docker` repository.
-
-* Make changes on your fork in a feature branch. Name your branch `XXXX-something`
-  where `XXXX` is the issue number you are working on.
-
-* Run `gofmt -s -w file.go` on each changed file before
-  committing your changes. Most editors have plug-ins that do this automatically.
-
-* Update the documentation when creating or modifying features.
-
-* Commits that fix or close an issue should reference it in the commit message with
-  `Closes #XXXX` or `Fixes #XXXX`. These mentions automatically close the
-  issue on merge.
-
-* After every commit, run the test suite and ensure it is passing.
-
-* Sync and rebase frequently as you code to keep up with `docker` master.
-
-* Set your `git` signature and make sure you sign each commit.
-
-* Do not add yourself to the `AUTHORS` file. This file is autogenerated from the
-  Git history.
-
-## Tests and testing
-
-* Submit unit tests for your changes.
-
-* Make use of the built-in Go test framework.
-
-* Use existing Docker test files (`name_test.go`) for inspiration.
-
-* Run the full test suite on your
-  branch before submitting a pull request.
-
-* Run `make docs` to build the documentation and then check it locally. 
- -* Use an online grammar - checker or similar to test your documentation changes for clarity, - concision, and correctness. - -## Pull requests - -* Sync and cleanly rebase on top of Docker's `master` without multiple branches - mixed into the PR. - -* Before the pull request, squash your commits into logical units of work using - `git rebase -i` and `git push -f`. - -* Include documentation changes in the same commit so that a revert would - remove all traces of the feature or fix. - -* Reference each issue in your pull request description (`#XXXX`). - -## Respond to pull request reviews - -* Docker maintainers use LGTM (**l**ooks-**g**ood-**t**o-**m**e) in PR comments - to indicate acceptance. - -* Code review comments may be added to your pull request. Discuss, then make - the suggested modifications and push additional commits to your feature - branch. - -* Incorporate changes on your feature branch and push to your fork. This - automatically updates your open pull request. - -* Post a comment after pushing to alert reviewers to PR changes; pushing a - change does not send notifications. - -* A change requires LGTMs from an absolute majority of the maintainers of an - affected component. For example, if you change `docs/` and `registry/` code, - an absolute majority of the `docs/` and the `registry/` maintainers must - approve your PR. - -## Merges after pull requests - -* After a merge, [a master build](https://master.dockerproject.com/) is - available almost immediately. - -* If you made a documentation change, you can see it at - [docs.master.dockerproject.com](http://docs.master.dockerproject.com/). 
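Two checklist items, signing every commit and squashing commits into logical units before the PR, can be demonstrated together in a throwaway repository. This is a sketch under stated assumptions: the file, messages, and identity are made up, GNU `sed` is assumed, and `GIT_SEQUENCE_EDITOR` stands in for the interactive editor so the `git rebase -i` step runs unattended.

```shell
# Throwaway repository; names and messages are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q .
git config user.name "Example Contributor"
git config user.email "contributor@example.com"

# Three signed commits that belong to one logical unit of work.
for msg in "Tweak some of the other text for grammar" \
           "Fix a link" \
           "Add a new line about RHEL"; do
  echo "$msg" >> rhel.md
  git add rhel.md
  git commit -q -s -m "$msg"   # -s adds the Signed-off-by trailer
done

# Squash: rewrite the rebase todo so every line after the first turns
# "pick" into "squash", and accept the combined message unedited.
GIT_SEQUENCE_EDITOR='sed -i -e "2,\$s/^pick/squash/"' \
  git -c core.editor=true rebase -i --root

git rev-list --count HEAD   # a single squashed commit remains
```

In a real contribution you would rebase against `upstream/master` instead of `--root`, edit the combined message in your editor, and then `git push -f` the squashed branch to your fork.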
diff --git a/project/create-pr.md~ b/project/create-pr.md~ deleted file mode 100644 index 197aee849d..0000000000 --- a/project/create-pr.md~ +++ /dev/null @@ -1,127 +0,0 @@ -page_title: Create a pull request (PR) -page_description: Basic workflow for Docker contributions -page_keywords: contribute, pull request, review, workflow, beginner, squash, commit - -# Create a pull request (PR) - -A pull request (PR) sends your changes to the Docker maintainers for review. You -create a pull request on GitHub. A pull request "pulls" changes from your forked -repository into the `docker/docker` repository. - -You can see the -list of active pull requests to Docker on GitHub. - -## Check your work - -Before you create a pull request, check your work. - -1. In a terminal window, go to the root of your `docker-fork` repository. - - $ cd ~/repos/docker-fork - -2. Check out your feature branch. - - $ git checkout 11038-fix-rhel-link - Already on '11038-fix-rhel-link' - -3. Run the full test suite on your branch. - - $ make test - - All the tests should pass. If they don't, find out why and correct the - situation. - -4. Optionally, if you modified the documentation, build the documentation: - - $ make docs - -5. Commit and push any changes that result from your checks. - -## Rebase your branch - -Always rebase and squash your commits before making a pull request. - -1. Fetch any last-minute changes from `docker/docker`. - - $ git fetch upstream master - From github.com:docker/docker - * branch master -> FETCH_HEAD - -2. Start an interactive rebase. - - $ git rebase -i upstream/master - -3. Rebase opens an editor with a list of commits. - - pick 1a79f55 Tweak some of the other text for grammar - pick 53e4983 Fix a link - pick 3ce07bb Add a new line about RHEL - - If you run into trouble, `git rebase --abort` removes any changes and gets - you back to where you started. - -4. Replace the `pick` keyword with `squash` on all but the first commit. 
- - pick 1a79f55 Tweak some of the other text for grammar - squash 53e4983 Fix a link - squash 3ce07bb Add a new line about RHEL - - After closing the file, `git` opens your editor again to edit the commit - message. - -5. Edit and save your commit message. - - `git commit -s` - - Make sure your message includes **Note:** -> The documentation is written with paragraphs wrapped at 80 column lines to -> make it easier for terminal use. You can probably set up your favorite text -> editor to do this automatically for you. - -### Prose style - -In general, try to write simple, declarative prose. We prefer short, -single-clause sentences and brief three-to-five sentence paragraphs. Try to -choose vocabulary that is straightforward and precise. Avoid creating new terms, -using obscure terms or, in particular, using a lot of jargon. For example, use -"use" instead of "leverage". - -That said, don’t feel like you have to write for localization or for -English-as-a-second-language (ESL) speakers specifically. Assume you are writing -for an ordinary speaker of English with a basic university education. If your -prose is simple, clear, and straightforward, it will translate readily. - -One way to think about this is to assume Docker’s users are generally university -educated and read at at least a "16th" grade level (meaning they have a -university degree). You can use a [readability -tester](https://readability-score.com/) to help guide your judgement. For -example, the readability score for the phrase "Containers should be ephemeral" -is around the 13th grade level (first year at university), and so is acceptable. - -In all cases, we prefer clear, concise communication over stilted, formal -language. Don't feel like you have to write documentation that "sounds like -technical writing." 
- -### Metaphor and figurative language - -One exception to the "don’t write directly for ESL" rule is to avoid the use of -metaphor or other -[figurative language](http://en.wikipedia.org/wiki/Literal_and_figurative_language) to -describe things. There are too many cultural and social issues that can prevent -a reader from correctly interpreting a metaphor. - -## Specific conventions - -Below are some specific recommendations (and a few deviations) from AP style -that we use in our docs. - -### Contractions - -As long as your prose does not become too slangy or informal, it's perfectly -acceptable to use contractions in our documentation. Make sure to use -apostrophes correctly. - -### Use of dashes in a sentence - -Dashes refer to the en dash (–) and the em dash (—). Dashes can be used to -separate parenthetical material. - -Usage example: This is an example of a Docker client – which uses the Big Widget -to run – and does x, y, and z. - -Use dashes cautiously and consider whether commas or parentheses would work just -as well. We always emphasize short, succinct sentences. - -More info from the always handy [Grammar Girl site](http://www.quickanddirtytips.com/education/grammar/dashes-parentheses-and-commas). - -### Pronouns - -It's okay to use first and second person pronouns. Specifically, use "we" to -refer to Docker and "you" to refer to the user. For example, "We built the -`exec` command so you can resize a TTY session." - -As much as possible, avoid using gendered pronouns ("he" and "she", etc.). -Either recast the sentence so the pronoun is not needed or, less preferably, -use "they" instead. If you absolutely can't get around using a gendered pronoun, -pick one and stick to it. Which one you choose is up to you. One common -convention is to use the pronoun of the author's gender, but if you prefer to -default to "he" or "she", that's fine too. - -### Capitalization - -#### In general - -Only proper nouns should be capitalized in body text.
In general, strive to be -as strict as possible in applying this rule. Avoid using capitals for emphasis -or to denote "specialness". - -The word "Docker" should always be capitalized when referring to either the -company or the technology. The only exception is when the term appears in a code -sample. - -#### Starting sentences - -Because code samples should always be written exactly as they would appear -on-screen, you should avoid starting sentences with a code sample. - -#### In headings - -Headings take sentence capitalization, meaning that only the first letter is -capitalized (and words that would normally be capitalized in a sentence, e.g., -"Docker"). Do not use Title Case (i.e., capitalizing every word) for headings. Generally, we adhere to [AP style -for titles](http://www.quickanddirtytips.com/education/grammar/capitalizing-titles). - -### Periods - -We prefer one space after a period at the end of a sentence, not two. - -See [lists](#lists) below for how to punctuate list items. - -### Abbreviations and acronyms - -* Exempli gratia (e.g.) and id est (i.e.): these should always have periods and -are always followed by a comma. - -* Acronyms are pluralized by simply adding "s", e.g., PCs, OSs. - -* On first use on a given page, the complete term should be used, with the -abbreviation or acronym in parentheses. E.g., Red Hat Enterprise Linux (RHEL). -The exception is common, non-technical acronyms like AKA or ASAP. Note that -acronyms other than i.e. and e.g. are capitalized. - -* Other than "e.g." and "i.e." (as discussed above), acronyms do not take -periods: PC, not P.C. - - -### Lists - -When writing lists, keep the following in mind: - -Use bullets when the items being listed are independent of each other and the -order of presentation is not important. - -Use numbers for steps that have to happen in order or if you have mentioned the -list in introductory text. 
For example, if you wrote "There are three config -settings available for SSL, as follows:", you would number each config setting -in the subsequent list. - -In all lists, if an item is a complete sentence, it should end with a -period. Otherwise, we prefer no terminal punctuation for list items. -Each item in a list should start with a capital. - -### Numbers - -Write out numbers in body text and titles from one to ten. From 11 on, use numerals. - -### Notes - -Use notes sparingly and only to bring things to the reader's attention that are -critical or otherwise deserving of being called out from the body text. Please -format all notes as follows: - - > **Note:** - > One line of note text - > another line of note text - -### Avoid excess use of "i.e." - -Minimize your use of "i.e.". It can add an unnecessary interpretive burden on -the reader. Avoid writing "This is a thing, i.e., it is like this". Just -say what it is: "This thing is …" - -### Preferred usages - -#### Login vs. log in. - -A "login" is a noun (one word), as in "Enter your login". "Log in" is a compound -verb (two words), as in "Log in to the terminal". - -### Oxford comma - -One way in which we differ from AP style is that Docker’s docs use the [Oxford -comma](http://en.wikipedia.org/wiki/Serial_comma) in all cases. That’s our -position on this controversial topic, we won't change our mind, and that’s that! - -### Code and UI text styling - -We require `code font` styling (monospace, sans-serif) for all text that refers -to a command or other input or output from the CLI. This includes file paths -(e.g., `/etc/hosts/docker.conf`). If you enclose text in backticks (`) markdown -will style the text as code. - -Text from a CLI should be quoted verbatim, even if it contains errors or its -style contradicts this guide. You can add "(sic)" after the quote to indicate -the errors are in the quote and are not errors in our docs. 
- -Text taken from a GUI (e.g., menu text or button text) should appear in "double -quotes". The text should take the exact same capitalization, etc. as appears in -the GUI. E.g., Click "Continue" to save the settings. - -Text that refers to a keyboard command or hotkey is capitalized (e.g., Ctrl-D). - -When writing CLI examples, give the user hints by making the examples resemble -exactly what they see in their shell: - -* Indent shell examples by 4 spaces so they get rendered as code blocks. -* Start typed commands with `$ ` (dollar space), so that they are easily - differentiated from program output. -* Program output has no prefix. -* Comments begin with `# ` (hash space). -* In-container shell commands begin with `$$ ` (dollar dollar space). - -Please test all code samples to ensure that they are correct and functional so -that users can successfully cut-and-paste samples directly into the CLI. - -## Pull requests - -The pull request (PR) process is in place so that we can ensure changes made to -the docs are the best changes possible. A good PR will do some or all of the -following: - -* Explain why the change is needed -* Point out potential issues or questions -* Ask for help from experts in the company or the community -* Encourage feedback from core developers and others involved in creating the - software being documented. - -Writing a PR that is singular in focus and has clear objectives will encourage -all of the above. Done correctly, the process allows reviewers (maintainers and -community members) to validate the claims of the documentation and identify -potential problems in communication or presentation. - -### Commit messages - -In order to write clear, useful commit messages, please follow these -[recommendations](http://robots.thoughtbot.com/5-useful-tips-for-a-better-commit-message). - -## Links - -For accessibility and usability reasons, avoid using phrases such as "click -here" for link text.
Recast your sentence so that the link text describes the -content of the link, as we did in the -["Commit messages" section](#commit-messages) above. - -You can use relative links (../linkeditem) to link to other pages in Docker's -documentation. - -## Graphics - -When you need to add a graphic, try to make the file-size as small as possible. -If you need help reducing file-size of a high-resolution image, feel free to -contact us for help. -Usually, graphics should go in the same directory as the .md file that -references them, or in a subdirectory for images if one already exists. - -The preferred file format for graphics is PNG, but GIF and JPG are also -acceptable. - -If you are referring to a specific part of the UI in an image, use -call-outs (circles and arrows or lines) to highlight what you’re referring to. -Line width for call-outs should not exceed five pixels. The preferred color for -call-outs is red. - -Be sure to include descriptive alt-text for the graphic. This greatly helps -users with accessibility issues. - -Lastly, be sure you have permission to use any included graphics. \ No newline at end of file diff --git a/project/find-an-issue.md~ b/project/find-an-issue.md~ deleted file mode 100644 index 2b3396e6e7..0000000000 --- a/project/find-an-issue.md~ +++ /dev/null @@ -1,240 +0,0 @@ -page_title: Make a project contribution -page_description: Basic workflow for Docker contributions -page_keywords: contribute, pull request, review, workflow, beginner, expert, squash, commit - - - - -# Find and claim an issue - -On this page, you choose what you want to work on. As a contributor you can work -on whatever you want. If you are new to contributing, you should start by -working with our known issues. - -## Understand the issue types - -An existing issue is something reported by a Docker user. As issues come in, -our maintainers triage them. Triage is its own topic. 
For now, it is important -for you to know that triage includes ranking issues according to difficulty. - -Triaged issues have one of these labels: - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Level | Experience level guideline |
|----------------|----------------------------|
| exp/beginner | You have made less than 10 contributions in your lifetime to any open source project. |
| exp/novice | You have made more than 10 contributions to an open source project or at least 5 contributions to Docker. |
| exp/proficient | You have made more than 5 contributions to Docker which amount to at least 200 code lines or 1000 documentation lines. |
| exp/expert | You have made less than 20 commits to Docker which amount to 500-1000 code lines or 1000-3000 documentation lines. |
| exp/master | You have made more than 20 commits to Docker and greater than 1000 code lines or 3000 documentation lines. |
- -As the table states, these labels are meant as guidelines. You might have -written a whole plugin for Docker in a personal project and never contributed to -Docker. With that kind of experience, you could take on an exp/expert or exp/master level task. - -## Claim a beginner or novice issue - -In this section, you find and claim an open documentation issue. - - -1. Go to the `docker/docker`
repository. - -2. Click on the "Issues" link. - - A list of the open issues appears. - - ![Open issues](/project/images/issue_list.png) - -3. Look for the exp/beginner items on the list. - -4. Click on the "labels" dropdown and select exp/beginner. - - The system filters to show only open exp/beginner issues. - -5. Open an issue that interests you. - - The comments on the issues can tell you both the problem and the potential - solution. - -6. Make sure that no other user has chosen to work on the issue. - - We don't allow external contributors to assign issues to themselves. So, you - need to read the comments to find out whether a user has claimed the issue by leaving a - `#dibs` comment on the issue. - -7. When you find an open issue that both interests you and is unclaimed, add a -`#dibs` comment. - - ![Easy issue](/project/images/easy_issue.png) - - This example uses issue 11038. Your issue # will be different depending on - what you claimed. After a moment, Gordon, the Docker bot, changes the issue - status to claimed. - -8. Make a note of the issue number; you'll need it later. - -## Sync your fork and create a new branch - -If you have followed along in this guide, you forked the `docker/docker` -repository. Maybe that was an hour ago or a few days ago. In any case, before -you start working on your issue, sync your repository with the upstream -`docker/docker` master. Syncing ensures your repository has the latest -changes. - -To sync your repository: - -1. Open a terminal on your local host. - -2. Change directory to the `docker-fork` root. - - $ cd ~/repos/docker-fork - -3. Check out the master branch. - - $ git checkout master - Switched to branch 'master' - Your branch is up-to-date with 'origin/master'. - - Recall that `origin/master` is a branch on your remote GitHub repository. - -4. Make sure you have the upstream remote `docker/docker` by listing your remotes. 
- - $ git remote -v - origin https://github.com/moxiegirl/docker.git (fetch) - origin https://github.com/moxiegirl/docker.git (push) - upstream https://github.com/docker/docker.git (fetch) - upstream https://github.com/docker/docker.git (push) - - If the `upstream` remote is missing, add it. - - $ git remote add upstream https://github.com/docker/docker.git - -5. Fetch all the changes from the `upstream/master` branch. - - $ git fetch upstream - remote: Counting objects: 141, done. - remote: Compressing objects: 100% (29/29), done. - remote: Total 141 (delta 52), reused 46 (delta 46), pack-reused 66 - Receiving objects: 100% (141/141), 112.43 KiB | 0 bytes/s, done. - Resolving deltas: 100% (79/79), done. - From github.com:docker/docker - 9ffdf1e..01d09e4 docs -> upstream/docs - 05ba127..ac2521b master -> upstream/master - - This command gets all the changes from the `master` branch belonging to - the `upstream` remote. - -6. Rebase your local master with the `upstream/master`. - - $ git rebase upstream/master - First, rewinding head to replay your work on top of it... - Fast-forwarded master to upstream/master. - - This command replays all the commits from the upstream branch onto your local - branch. - -7. Check the status of your local branch. - - $ git status - On branch master - Your branch is ahead of 'origin/master' by 38 commits. - (use "git push" to publish your local commits) - nothing to commit, working directory clean - - Your local repository now has any changes from the `upstream` remote. You - need to push the changes to your own remote fork, which is `origin/master`. - -8. Push the rebased master to `origin/master`. - - $ git push origin - Username for 'https://github.com': moxiegirl - Password for 'https://moxiegirl@github.com': - Counting objects: 223, done. - Compressing objects: 100% (38/38), done. - Writing objects: 100% (69/69), 8.76 KiB | 0 bytes/s, done. 
- Total 69 (delta 53), reused 47 (delta 31) - To https://github.com/moxiegirl/docker.git - 8e107a9..5035fa1 master -> master - -9. Create a new feature branch to work on your issue. - - Your branch name should have the format `XXXX-descriptive` where `XXXX` is - the issue number you are working on. For example: - - $ git checkout -b 11038-fix-rhel-link - Switched to a new branch '11038-fix-rhel-link' - - Your branch should be up-to-date with the upstream/master. Why? Because you - branched off a freshly synced master. Let's check this anyway in the next - step. - -10. Rebase your branch from upstream/master. - - $ git rebase upstream/master - Current branch 11038-fix-rhel-link is up to date. - - At this point, your local branch, your remote repository, and the Docker - repository all have identical code. You are ready to make changes for your - issue. - - -## Where to go next - -At this point, you know what you want to work on and you have a branch to do -your work in. Go on to the next section to learn [how to work on your -changes](/project/work-issue/). diff --git a/project/get-help.md~ b/project/get-help.md~ deleted file mode 100644 index 9c98549c9d..0000000000 --- a/project/get-help.md~ +++ /dev/null @@ -1,147 +0,0 @@ -page_title: Where to chat or get help -page_description: Describes Docker's communication channels -page_keywords: IRC, Google group, Twitter, blog, Stackoverflow - - - -# Where to chat or get help - -There are several communication channels you can use to chat with Docker -community members and developers. - - - - - - - - - - - - - - - - - - - 
Internet Relay Chat (IRC) - -

- IRC is a direct line to our most knowledgeable Docker users. - The #docker and #docker-dev groups are on - irc.freenode.net. IRC was first created in 1988, - so it is a rich chat protocol, but it can overwhelm new users. You can search - our chat archives. 

- Read our IRC quickstart guide below for an easy way to get started. -
Google Groups - There are two groups. - Docker-user - is for people using Docker containers. - The docker-dev - group is for contributors and others working on the Docker - project. 
Twitter - You can follow Docker's Twitter account - to get updates on our products. You can also tweet us questions or just - share blogs or stories. 
Stack Overflow - Stack Overflow has over 7,000 Docker questions listed. We regularly - monitor Docker questions, - and so do many other knowledgeable Docker users. 
- - -## IRC Quickstart - -IRC can also be overwhelming for new users. This quickstart shows you -the easiest way to connect to IRC. - -1. In your browser open http://webchat.freenode.net - - ![Login screen](/project/images/irc_connect.png) - - -2. Fill out the form. - - - - - - - - - - - - - - -
NicknameThe short name you want to be known as in IRC.
Channels#docker
reCAPTCHAUse the value provided.
- -3. Click "Connect". - - The system connects you to chat. You'll see a lot of text. At the bottom of - the display is a command line. Just above the command line the system asks - you to register. - - ![Login screen](/project/images/irc_after_login.png) - - -4. In the command line, register your nickname. - - /msg NickServ REGISTER password youremail@example.com - - ![Login screen](/project/images/register_nic.png) - - The IRC system sends an email to the address you - enter. The email contains instructions for completing your registration. - -5. Open your mail client and look for the email. - - ![Login screen](/project/images/register_email.png) - -6. Back in the browser, complete the registration according to the email. - - /msg NickServ VERIFY REGISTER moxiegirl_ acljtppywjnr - -7. Join the `#docker` group using the following command. - - /j #docker - - You can also join the `#docker-dev` group. - - /j #docker-dev - -8. To ask questions to the channel just type messages in the command line. - - ![Login screen](/project/images/irc_chat.png) - -9. To quit, close the browser window. - - -### Tips and learning more about IRC - -Next time you return to log into chat, you'll need to re-enter your password -on the command line using this command: - - /msg NickServ identify - -If you forget or lose your password see the FAQ on -freenode.net to learn how to recover it. - -This quickstart was meant to get you up and into IRC very quickly. If you find -IRC useful there is a lot more to learn. Drupal, another open source project, -actually has -written a lot of good documentation about using IRC for their project -(thanks Drupal!). 
diff --git a/project/glossary.md~ b/project/glossary.md~ deleted file mode 100644 index 5324cda153..0000000000 --- a/project/glossary.md~ +++ /dev/null @@ -1,7 +0,0 @@ -page_title: Glossary -page_description: tbd -page_keywords: tbd - -## Glossary - -TBD \ No newline at end of file diff --git a/project/make-a-contribution.md~ b/project/make-a-contribution.md~ deleted file mode 100644 index e0b4e89720..0000000000 --- a/project/make-a-contribution.md~ +++ /dev/null @@ -1,35 +0,0 @@ -page_title: Understand how to contribute -page_description: Explains basic workflow for Docker contributions -page_keywords: contribute, maintainers, review, workflow, process - -# Understand how to contribute - -Contributing is a process where you work with Docker maintainers and the -community to improve Docker. The maintainers are experienced contributors -who specialize in one or more Docker components. Maintainers play a big role -in reviewing contributions. - -There is a formal process for contributing. We try to keep our contribution -process simple so you'll want to contribute frequently. - - -## The basic contribution workflow - -In this guide, you work through Docker's basic contribution workflow by fixing a -single *beginner* issue in the `docker/docker` repository. The workflow -for fixing simple issues looks like this: - -![Simple process](/project/images/existing_issue.png) - -All Docker repositories have code and documentation. You use this same workflow -for either content type. For example, you can find and fix doc or code issues. -Also, you can propose a new Docker feature or propose a new Docker tutorial. - -Some workflow stages do have slight differences for code or documentation -contributions. When you reach that point in the flow, we make sure to tell you. - - -## Where to go next - -Now that you know a little about the contribution process, go to the next section -to [find an issue you want to work on](/project/find-an-issue/). 
diff --git a/project/review-pr.md~ b/project/review-pr.md~ deleted file mode 100644 index e8cb6c7c04..0000000000 --- a/project/review-pr.md~ +++ /dev/null @@ -1,124 +0,0 @@ -page_title: Participate in the PR Review -page_description: Basic workflow for Docker contributions -page_keywords: contribute, pull request, review, workflow, beginner, squash, commit - - -# Participate in the PR Review - -Creating a pull request is nearly the end of the contribution process. At this -point, your code is reviewed both by our continuous integration (CI) systems and -by our maintainers. - -The CI system is an automated system. The maintainers are human beings who also -work on Docker. You need to understand and work with both the "bots" and the -"beings" to see your contribution through review. - - -## How we process your review - -The first to review your pull request is Gordon. Gordon is fast. He checks your -pull request (PR) for common problems like a missing signature. If Gordon finds a -problem, he'll send an email through your GitHub user account: - -![Gordon](/project/images/gordon.jpeg) - -Our build bot system starts building your changes while Gordon sends any emails. - -The build system double-checks your work by compiling your code with Docker's master -code. Building includes running the same tests you ran locally. If you forgot -to run tests or missed something in fixing problems, the automated build is our -safety check. - -After Gordon and the bots, the "beings" review your work. Docker maintainers look -at your pull request and comment on it. The shortest comment you might see is -`LGTM` which means **l**ooks-**g**ood-**t**o-**m**e. If you get an `LGTM`, that -is a good thing: you passed that review. - -For complex changes, maintainers may ask you questions or ask you to change -something about your submission. All maintainer comments on a PR go to the -email address associated with your GitHub account. Any GitHub user who -"participates" in a PR receives an email too. 
Participating means creating or -commenting on a PR. - -Our maintainers are very experienced Docker users and open source contributors. -So, they value your time and will try to work efficiently with you by keeping -their comments specific and brief. If they ask you to make a change, you'll -need to update your pull request with additional changes. - -## Update an existing pull request - -To update your existing pull request: - -1. Change one or more files in your local `docker-fork` repository. - -2. Commit the change with the `git commit --amend` command. - - $ git commit --amend - - Git opens an editor containing your last commit message. - -3. Adjust your last commit message to reflect this new change. - - Added a new sentence per Anaud's suggestion - - Signed-off-by: Mary Anthony - - # Please enter the commit message for your changes. Lines starting - # with '#' will be ignored, and an empty message aborts the commit. - # On branch 11038-fix-rhel-link - # Your branch is up-to-date with 'origin/11038-fix-rhel-link'. - # - # Changes to be committed: - # modified: docs/sources/installation/mac.md - # modified: docs/sources/installation/rhel.md - -4. Force push the amended commit to your origin; because `--amend` rewrites - history, a plain push is rejected. - - $ git push -f origin - -5. Open your browser to your pull request on GitHub. - - You should see your pull request now contains your newly pushed code. - -6. Add a comment to your pull request. - - GitHub only notifies PR participants when you comment. For example, you can - mention that you updated your PR. Your comment alerts the maintainers that - you made an update. - -A change requires LGTMs from an absolute majority of an affected component's -maintainers. For example, if you change `docs/` and `registry/` code, an -absolute majority of the `docs/` and the `registry/` maintainers must approve -your PR. Once you get approval, we merge your pull request into Docker's -`master` code branch. - -## After the merge - -It can take time to see a merged pull request in Docker's official release.
-A master build is available almost immediately though. Docker builds and -updates its development binaries after each merge to `master`. - -1. Browse to https://master.dockerproject.com/. - -2. Look for the binary appropriate to your system. - -3. Download and run the binary. - - You might want to run the binary in a container though. This - will keep your local host environment clean. - -4. View any documentation changes at docs.master.dockerproject.com. - -Once you've verified everything merged, feel free to delete your feature branch -from your fork. For information on how to do this, - -see the GitHub help on deleting branches. - -## Where to go next - -At this point, you have completed all the basic tasks in our contributors guide. -If you enjoyed contributing, let us know by completing another beginner -issue or two. We really appreciate the help. - -If you are very experienced and want to make a major change, go on to -[learn about advanced contributing](/project/advanced-contributing). diff --git a/project/set-up-dev-env.md~ b/project/set-up-dev-env.md~ deleted file mode 100644 index 0629822b93..0000000000 --- a/project/set-up-dev-env.md~ +++ /dev/null @@ -1,421 +0,0 @@ -page_title: Work with a development container -page_description: How to use Docker's development environment -page_keywords: development, inception, container, image Dockerfile, dependencies, Go, artifacts - -# Work with a development container - -In this section, you learn to develop like a member of Docker's core team. -The `docker` repository includes a `Dockerfile` at its root. This file defines -Docker's development environment. The `Dockerfile` lists the environment's -dependencies: system libraries and binaries, go environment, go dependencies, -etc. - -Docker's development environment is itself, ultimately a Docker container. -You use the `docker` repository and its `Dockerfile` to create a Docker image, -run a Docker container, and develop code in the container. 
Docker itself builds, tests, and releases new Docker versions using this container.

If you followed the procedures that set up Git for contributing, you should
have a fork of the `docker/docker` repository. You also created a branch
called `dry-run-test`. In this section, you continue working with your fork on
this branch.

## Clean your host of Docker artifacts

Docker developers run the latest stable release of the Docker software, or
Boot2Docker and Docker if their machine is Mac OS X. They clean their local
hosts of unnecessary Docker artifacts such as stopped containers or unused
images. Cleaning unnecessary artifacts isn't strictly necessary, but it is
good practice, so it is included here.

To remove unnecessary artifacts:

1. Verify that you have no unnecessary containers running on your host.

        $ docker ps

    You should see something similar to the following:
        CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    There are no running containers on this host. If you have running but
    unused containers, stop and then remove them with the `docker stop` and
    `docker rm` commands.

2. Verify that your host has no dangling images.

        $ docker images

    You should see something similar to the following:
        REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    This host has no images. You may have one or more _dangling_ images. A
    dangling image is not used by a running container and is not an ancestor of
    another image on your system. A fast way to remove dangling images is
    the following:

        $ docker rmi -f $(docker images -q -a -f dangling=true)

    This command uses `docker images` to list all images (`-a` flag) by numeric
    IDs (`-q` flag) and filters them to find dangling images (`-f dangling=true`).
    Then, the `docker rmi` command forcibly (`-f` flag) removes
    the resulting list. To remove just one image, use the `docker rmi ID`
    command.

## Build an image

If you followed the last procedure, your host is clean of unnecessary images
and containers. In this section, you build an image from the Docker development
environment.

1. Open a terminal.

    Mac users, use `boot2docker status` to make sure Boot2Docker is running. You
    may need to run `eval "$(boot2docker shellinit)"` to initialize your shell
    environment.

2. Change into the root of your forked repository.

        $ cd ~/repos/docker-fork

    If you are following along with this guide, you created a `dry-run-test`
    branch when you set up Git for contributing.

3. Ensure you are on your `dry-run-test` branch.

        $ git checkout dry-run-test

    If you get a message that the branch doesn't exist, add the `-b` flag so the
    command both creates the branch and checks it out.

4. Compile your development environment container into an image.

        $ docker build -t dry-run-test .

    The `docker build` command returns informational messages as it runs. The
    first build may take a few minutes to create an image. Using the
    instructions in the `Dockerfile`, the build may need to download source and
    other images. A successful build returns a final status message similar to
    the following:

        Successfully built 676815d59283

5. List your Docker images again.
        $ docker images

    You should see something similar to this:
        REPOSITORY          TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
        dry-run-test        latest              663fbee70028        About a minute ago
        ubuntu              trusty              2d24f826cb16        2 days ago           188.3 MB
        ubuntu              trusty-20150218.1   2d24f826cb16        2 days ago           188.3 MB
        ubuntu              14.04               2d24f826cb16        2 days ago           188.3 MB
        ubuntu              14.04.2             2d24f826cb16        2 days ago           188.3 MB
        ubuntu              latest              2d24f826cb16        2 days ago           188.3 MB
    Locate your new `dry-run-test` image in the list. You should also see a
    number of `ubuntu` images. The build process creates these. They are the
    ancestors of your new Docker development image. When you next rebuild your
    image, the build process reuses these ancestor images if they exist.

    Keeping the ancestor images improves the build performance. When you rebuild
    the child image, the build process uses the local ancestors rather than
    retrieving them from the Hub. The build process gets new ancestors only if
    Docker Hub has updated versions.

## Start a container and run a test

At this point, you have created a new Docker development environment image. Now,
you'll use this image to create a Docker container to develop in. Then, you'll
build and run a `docker` binary in your container.

1. Open two additional terminals on your host.

    At this point, you'll have about three terminals open.

    ![Multiple terminals](/project/images/three_terms.png)

    Mac OS X users, make sure you run `eval "$(boot2docker shellinit)"` in any new
    terminals.

2. In a terminal, create a new container from your `dry-run-test` image.

        $ docker run --privileged --rm -ti dry-run-test /bin/bash
        root@5f8630b873fe:/go/src/github.com/docker/docker#

    The command creates a container from your `dry-run-test` image. It opens an
    interactive terminal (`-ti`) running a `/bin/bash` shell. The
    `--privileged` flag gives the container access to kernel features and device
    access. It is this flag that allows you to run a container in a container.
    Finally, the `--rm` flag instructs Docker to remove the container when you
    exit the `/bin/bash` shell.

    The container includes the source of your image repository in the
    `/go/src/github.com/docker/docker` directory. Try listing the contents to
    verify they are the same as that of your `docker-fork` repo.

    ![List example](/project/images/list_example.png)

3. Investigate your container a bit.
    If you run `go version`, you'll find the `go` language is part of the
    container.

        root@31ed86e9ddcf:/go/src/github.com/docker/docker# go version
        go version go1.4.2 linux/amd64

    Similarly, if you run `docker version`, you'll find the container
    has no `docker` binary.

        root@31ed86e9ddcf:/go/src/github.com/docker/docker# docker version
        bash: docker: command not found

    You will create one in the next steps.

4. From the `/go/src/github.com/docker/docker` directory, make a `docker` binary
with the `make.sh` script.

        root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh binary

    You only call `hack/make.sh` to build a binary _inside_ a Docker
    development container as you are now. On your host, you'll use `make`
    commands (more about this later).

    As it makes the binary, the `make.sh` script reports the build's progress.
    When the command completes successfully, you should see the following
    output:

        ---> Making bundle: ubuntu (in bundles/1.5.0-dev/ubuntu)
        Created package {:path=>"lxc-docker-1.5.0-dev_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb"}
        Created package {:path=>"lxc-docker_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb"}

5. List all the contents of the `binary` directory.

        root@5f8630b873fe:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary/
        docker  docker-1.5.0-dev  docker-1.5.0-dev.md5  docker-1.5.0-dev.sha256

    You should see that the `binary` directory, just as it sounds, contains the
    newly made binaries.

6. Copy the `docker` binary to the `/usr/bin` directory of your container.

        root@5f8630b873fe:/go/src/github.com/docker/docker# cp bundles/1.5.0-dev/binary/docker /usr/bin

7. Inside your container, check your Docker version.

        root@5f8630b873fe:/go/src/github.com/docker/docker# docker --version
        Docker version 1.5.0-dev, build 6e728fb

    Inside the container you are running a development version.
    This is the version on the current branch; it reflects the value of the
    `VERSION` file at the root of your `docker-fork` repository.

8. Start a `docker` daemon running inside your container.

        root@5f8630b873fe:/go/src/github.com/docker/docker# docker -dD

    The `-dD` flag starts the daemon in debug mode; you'll find this useful
    when debugging your code.

9. Bring up one of the terminals on your local host.

10. List your containers and look for the container running the `dry-run-test` image.

        $ docker ps
        CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS               NAMES
        474f07652525        dry-run-test:latest   "hack/dind /bin/bash   14 minutes ago      Up 14 minutes                           tender_shockley
- - In this example, the container's name is `tender_shockley`; yours will be - different. - -11. From the terminal, start another shell on your Docker development container. - - $ docker exec -it tender_shockley bash - - At this point, you have two terminals both with a shell open into your - development container. One terminal is running a debug session. The other - terminal is displaying a `bash` prompt. - -12. At the prompt, test the Docker client by running the `hello-world` container. - - root@9337c96e017a:/go/src/github.com/docker/docker# docker run hello-world - - You should see the image load and return. Meanwhile, you - can see the calls made via the debug session in your other terminal. - - ![List example](/project/images/three_running.png) - - -## Restart a container with your source - -At this point, you have experienced the "Docker inception" technique. That is, -you have: - -* built a Docker image from the Docker repository -* created and started a Docker development container from that image -* built a Docker binary inside of your Docker development container -* launched a `docker` daemon using your newly compiled binary -* called the `docker` client to run a `hello-world` container inside - your development container - -When you really get to developing code though, you'll want to iterate code -changes and builds inside the container. For that you need to mount your local -Docker repository source into your Docker container. Try that now. - -1. If you haven't already, exit out of BASH shells in your running Docker -container. - - If you have followed this guide exactly, exiting out your BASH shells stops - the running container. You can use the `docker ps` command to verify the - development container is stopped. All of your terminals should be at the - local host prompt. - -2. Choose a terminal and make sure you are in your `docker-fork` repository. 
- - $ pwd - /Users/mary/go/src/github.com/moxiegirl/docker-fork - - Your location will be different because it reflects your environment. - -3. Create a container using `dry-run-test` but this time mount your repository -onto the `/go` directory inside the container. - - $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash - - When you pass `pwd`, `docker` resolves it to your current directory. - -4. From inside the container, list your `binary` directory. - - root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary - ls: cannot access binary: No such file or directory - - Your `dry-run-test` image does not retain any of the changes you made inside - the container. This is the expected behavior for a container. - -5. In a fresh terminal on your local host, change to the `docker-fork` root. - - $ cd ~/repos/docker-fork/ - -6. Create a fresh binary but this time use the `make` command. - - $ make BINDDIR=. binary - - The `BINDDIR` flag is only necessary on Mac OS X but it won't hurt to pass - it on Linux command line. The `make` command, like the `make.sh` script - inside the container, reports its progress. When the make succeeds, it - returns the location of the new binary. - - -7. Back in the terminal running the container, list your `binary` directory. - - root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary - docker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256 - - The compiled binaries created from your repository on your local host are - now available inside your running Docker development container. - -8. Repeat the steps you ran in the previous procedure. 
- - * copy the binary inside the development container using - `cp bundles/1.5.0-dev/binary/docker /usr/bin` - * start `docker -dD` to launch the Docker daemon inside the container - * run `docker ps` on local host to get the development container's name - * connect to your running container `docker exec -it container_name bash` - * use the `docker run hello-world` command to create and run a container - inside your development container - -## Where to go next - -Congratulations, you have successfully achieved Docker inception. At this point, -you've set up your development environment and verified almost all the essential -processes you need to contribute. Of course, before you start contributing, -[you'll need to learn one more piece of the development environment, the test -framework](/project/test-and-docs/). diff --git a/project/set-up-git.md~ b/project/set-up-git.md~ deleted file mode 100644 index ba42c81006..0000000000 --- a/project/set-up-git.md~ +++ /dev/null @@ -1,238 +0,0 @@ -page_title: Configure Git for contributing -page_description: Describes how to set up your local machine and repository -page_keywords: GitHub account, repository, clone, fork, branch, upstream, Git, Go, make, - -# Configure Git for contributing - -Work through this page to configure Git and a repository you'll use throughout -the Contributor Guide. The work you do further in the guide, depends on the work -you do here. - -## Fork and clone the Docker code - -Before contributing, you first fork the Docker code repository. A fork copies -a repository at a particular point in time. GitHub tracks for you where a fork -originates. - -As you make contributions, you change your fork's code. When you are ready, -you make a pull request back to the original Docker repository. If you aren't -familiar with this workflow, don't worry, this guide walks you through all the -steps. - -To fork and clone Docker: - -1. Open a browser and log into GitHub with your account. - -2. 
Go to the docker/docker repository. - -3. Click the "Fork" button in the upper right corner of the GitHub interface. - - ![Branch Signature](/project/images/fork_docker.png) - - GitHub forks the repository to your GitHub account. The original - `docker/docker` repository becomes a new fork `YOUR_ACCOUNT/docker` under - your account. - -4. Copy your fork's clone URL from GitHub. - - GitHub allows you to use HTTPS or SSH protocols for clones. You can use the - `git` command line or clients like Subversion to clone a repository. - - ![Copy clone URL](/project/images/copy_url.png) - - This guide assume you are using the HTTPS protocol and the `git` command - line. If you are comfortable with SSH and some other tool, feel free to use - that instead. You'll need to convert what you see in the guide to what is - appropriate to your tool. - -5. Open a terminal window on your local host and change to your home directory. - - $ cd ~ - -6. Create a `repos` directory. - - $ mkdir repos - -7. Change into your `repos` directory. - - $ cd repos - -5. Clone the fork to your local host into a repository called `docker-fork`. - - $ git clone https://github.com/moxiegirl/docker.git docker-fork - - Naming your local repo `docker-fork` should help make these instructions - easier to follow; experienced coders don't typically change the name. - -6. Change directory into your new `docker-fork` directory. - - $ cd docker-fork - - Take a moment to familiarize yourself with the repository's contents. List - the contents. - -## Set your signature and an upstream remote - -When you contribute to Docker, you must certify you agree with the -Developer Certificate of Origin. -You indicate your agreement by signing your `git` commits like this: - - Signed-off-by: Pat Smith - -To create a signature, you configure your username and email address in Git. -You can set these globally or locally on just your `docker-fork` repository. -You must sign with your real name. 
We don't accept anonymous contributions or -contributions through pseudonyms. - -As you change code in your fork, you'll want to keep it in sync with the changes -others make in the `docker/docker` repository. To make syncing easier, you'll -also add a _remote_ called `upstream` that points to `docker/docker`. A remote -is just another a project version hosted on the internet or network. - -To configure your username, email, and add a remote: - -1. Change to the root of your `docker-fork` repository. - - $ cd docker-fork - -2. Set your `user.name` for the repository. - - $ git config --local user.name "FirstName LastName" - -3. Set your `user.email` for the repository. - - $ git config --local user.email "emailname@mycompany.com" - -4. Set your local repo to track changes upstream, on the `docker` repository. - - $ git remote add upstream https://github.com/docker/docker.git - -7. Check the result in your `git` configuration. - - $ git config --local -l - core.repositoryformatversion=0 - core.filemode=true - core.bare=false - core.logallrefupdates=true - remote.origin.url=https://github.com/moxiegirl/docker.git - remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* - branch.master.remote=origin - branch.master.merge=refs/heads/master - user.name=Mary Anthony - user.email=mary@docker.com - remote.upstream.url=https://github.com/docker/docker.git - remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/* - - To list just the remotes use: - - $ git remote -v - origin https://github.com/moxiegirl/docker.git (fetch) - origin https://github.com/moxiegirl/docker.git (push) - upstream https://github.com/docker/docker.git (fetch) - upstream https://github.com/docker/docker.git (push) - -## Create and push a branch - -As you change code in your fork, you make your changes on a repository branch. -The branch name should reflect what you are working on. In this section, you -create a branch, make a change, and push it up to your fork. 
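The section above added an `upstream` remote "to make syncing easier" but never shows the sync itself. Before you branch, here is a minimal sketch of that sync, assuming the remotes are configured as shown earlier and you are on a clean local `master`:

```shell
# Fetch the latest changes from the docker/docker repository.
git fetch upstream

# Switch to your local master branch and fast-forward it.
git checkout master
git merge upstream/master

# Push the updated master to your fork on GitHub.
git push origin master
```

Run this periodically so new feature branches start from current code; the remote names here match the `git remote -v` output above.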
This branch is just for testing your config for this guide. The changes are part
of a dry run, so the branch name is going to be `dry-run-test`. To create and push
the branch to your fork on GitHub:

1. Open a terminal and go to the root of your `docker-fork`.

        $ cd docker-fork

2. Create a `dry-run-test` branch.

        $ git checkout -b dry-run-test

    This command creates the branch and switches the repository to it.

3. Verify you are in your new branch.

        $ git branch
        * dry-run-test
          master

    The current branch has an * (asterisk) marker. So, these results show you
    are on the right branch.

4. Create a `TEST.md` file in the repository's root.

        $ touch TEST.md

5. Edit the file and add your email and location.

    ![Add your information](/project/images/contributor-edit.png)

    You can use any text editor you are comfortable with.

6. Save and close the file.

7. Check the status of your branch.

        $ git status
        On branch dry-run-test
        Untracked files:
          (use "git add <file>..." to include in what will be committed)

            TEST.md

        nothing added to commit but untracked files present (use "git add" to track)

    You've only changed the one file. It is untracked so far by git.

8. Add your file.

        $ git add TEST.md

    That is now the only _staged_ file. Staged is a fancy word for work that Git is
    tracking.

9. Sign and commit your change.

        $ git commit -s -m "Making a dry run test."
        [dry-run-test 6e728fb] Making a dry run test
         1 file changed, 1 insertion(+)
         create mode 100644 TEST.md

    Commit messages should have a short summary sentence of no more than 50
    characters. Optionally, you can also include a more detailed explanation
    after the summary. Separate the summary from any explanation with an empty
    line.

10. Push your changes to GitHub.
        $ git push --set-upstream origin dry-run-test
        Username for 'https://github.com': moxiegirl
        Password for 'https://moxiegirl@github.com':

    Git prompts you for your GitHub username and password. Then, the command
    returns a result.

        Counting objects: 13, done.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
        Total 3 (delta 1), reused 0 (delta 0)
        To https://github.com/moxiegirl/docker.git
         * [new branch]      dry-run-test -> dry-run-test
        Branch dry-run-test set up to track remote branch dry-run-test from origin.

11. Open your browser to GitHub.

12. Navigate to your Docker fork.

13. Make sure the `dry-run-test` branch exists, that it has your commit, and the
commit is signed.

    ![Branch Signature](/project/images/branch-sig.png)

## Where to go next

Congratulations, you have finished configuring both your local host environment
and Git for contributing. In the next section you'll [learn how to set up and
work in a Docker development container](/project/set-up-dev-env/).

diff --git a/project/set-up-prereqs.md~ b/project/set-up-prereqs.md~
deleted file mode 100644
index 30fd0d9c6c..0000000000
--- a/project/set-up-prereqs.md~
+++ /dev/null
@@ -1,298 +0,0 @@
page_title: Set up the prerequisites
page_description: Describes how to set up your local machine and repository
page_keywords: GitHub account, repository, clone, fork, branch, upstream, Git, Go, make,

# Set up the prerequisites

Work through this page to set up the software and host environment you need to
contribute. You'll find instructions for configuring your `git` repository and
creating a fork you'll use later in the guide.

## Get the Required Software

Before you begin contributing you must have:

* a GitHub account
* `git`
* `make`
* `docker`

You'll notice that `go`, the language that Docker is written in, is not listed.
That's because you don't need it installed; Docker's development environment provides it for you. You'll learn more about the development environment later. - -### Get a GitHub account - -To contribute to the Docker project, you will need a GitHub account. A free account is -fine. All the Docker project repositories are public and visible to everyone. - -You should also have some experience using both the GitHub application and `git` on the command line. - -### Install git - -Install `git` on your local system. You can check if `git` is on already on your -system and properly installed with the following command: - - $ git --version - - -This documentation is written using `git` version 2.2.2. Your version may be different depending on your OS. - -### Install make - -Install `make`. You can check if `make` is on your system with the following -command: - - $ make -v - -This documentation is written using GNU Make 3.81. Your version may be different depending on your OS. - -### Install or upgrade Docker - -If you haven't already, install the Docker software using the instructions for your operating system. If you have an existing installation, check your version and make sure you have the latest Docker. - -To check if `docker` is already installed on Linux: - - $ docker --version - Docker version 1.5.0, build a8a31ef - -On Mac OS X or Windows, you should have installed Boot2Docker which includes -Docker. You'll need to verify both Boot2Docker and then Docker. This -documentation was written on OS X using the following versions. - - - $ boot2docker version - Boot2Docker-cli version: v1.5.0 - Git commit: ccd9032 - - $ docker --version - Docker version 1.5.0, build a8a31ef - -## Linux users and sudo - -This guide assumes you have added your user to the `docker` group on your system. To check, list the group's contents: - - $ getent group docker - docker:x:999:ubuntu - -If the command returns no matches, you have two choices. 
You can preface this guide's `docker` commands with `sudo` as you work. Alternatively, you can add your user to the `docker` group as follows: - - $ sudo usermod -aG docker ubuntu - -You must log out and back in for this modification to take effect. - - -## Fork and clone the Docker code - -When contributing, you first fork the Docker code repository. A fork copies -a repository at a particular point in time. GitHub tracks for you where a fork originates. - -As you make contributions, you change your fork's code. When you are ready, -you make a pull request back to the original Docker repository. If you aren't -familiar with this workflow, don't worry, this guide walks you through all the -steps. - -To fork and clone Docker: - -1. Open a browser and log into GitHub with your account. - -2. Go to the docker/docker repository. - -3. Click the "Fork" button in the upper right corner of the GitHub interface. - - ![Branch Signature](/project/images/fork_docker.png) - - GitHub forks the repository to your GitHub account. The original - `docker/docker` repository becomes a new fork `YOUR_ACCOUNT/docker` under - your account. - -4. Copy your fork's clone URL from GitHub. - - GitHub allows you to use HTTPS or SSH protocols for clones. You can use the - `git` command line or clients like Subversion to clone a repository. - - ![Copy clone URL](/project/images/copy_url.png) - - This guide assume you are using the HTTPS protocol and the `git` command - line. If you are comfortable with SSH and some other tool, feel free to use - that instead. You'll need to convert what you see in the guide to what is - appropriate to your tool. - -5. Open a terminal window on your local host and change to your home directory. - - $ cd ~ - -6. Create a `repos` directory. - - $ mkdir repos - -7. Change into your `repos` directory. - - $ cd repos - -5. Clone the fork to your local host into a repository called `docker-fork`. 
- - $ git clone https://github.com/moxiegirl/docker.git docker-fork - - Naming your local repo `docker-fork` should help make these instructions - easier to follow; experienced coders don't typically change the name. - -6. Change directory into your new `docker-fork` directory. - - $ cd docker-fork - - Take a moment to familiarize yourself with the repository's contents. List - the contents. - - -## Set your signature and an upstream remote - -When you contribute to Docker, you must certify you agree with the Developer Certificate of Origin. You indicate your agreement by signing your `git` commits like this: - - Signed-off-by: Pat Smith - -To create a signature, you configure your username and email address in Git. You can set these globally or locally on just your `docker-fork` repository. You must sign with your real name. We don't accept anonymous contributions or contributions through pseudonyms. - -As you change code in your fork, you'll want to keep it in sync with the changes others make in the `docker/docker` repository. To make syncing easier, you'll also add a _remote_ called `upstream` that points to `docker/docker`. A remote is just another a project version hosted on the internet or network. - -To configure your username, email, and add a remote: - -1. Change to the root of your `docker-fork` repository. - - $ cd docker-fork - -2. Set your `user.name` for the repository. - - $ git config --local user.name "FirstName LastName" - -3. Set your `user.email` for the repository. - - $ git config --local user.email "emailname@mycompany.com" - -4. Set your local repo to track changes upstream, on the `docker` repository. - - $ git remote add upstream https://github.com/docker/docker.git - -7. Check the result in your `git` configuration. 
        $ git config --local -l
        core.repositoryformatversion=0
        core.filemode=true
        core.bare=false
        core.logallrefupdates=true
        remote.origin.url=https://github.com/moxiegirl/docker.git
        remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
        branch.master.remote=origin
        branch.master.merge=refs/heads/master
        user.name=Mary Anthony
        user.email=mary@docker.com
        remote.upstream.url=https://github.com/docker/docker.git
        remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/*

    To list just the remotes use:

        $ git remote -v
        origin    https://github.com/moxiegirl/docker.git (fetch)
        origin    https://github.com/moxiegirl/docker.git (push)
        upstream  https://github.com/docker/docker.git (fetch)
        upstream  https://github.com/docker/docker.git (push)

## Create and push a branch

As you change code in your fork, you make your changes on a repository branch.
The branch name should reflect what you are working on. In this section, you
create a branch, make a change, and push it up to your fork.

This branch is just for testing your config for this guide. The changes are part
of a dry run, so the branch name is going to be `dry-run-test`. To create and push
the branch to your fork on GitHub:

1. Open a terminal and go to the root of your `docker-fork`.

        $ cd docker-fork

2. Create a `dry-run-test` branch.

        $ git checkout -b dry-run-test

    This command creates the branch and switches the repository to it.

3. Verify you are in your new branch.

        $ git branch
        * dry-run-test
          master

    The current branch has an * (asterisk) marker. So, these results show you
    are on the right branch.

4. Create a `TEST.md` file in the repository's root.

        $ touch TEST.md

5. Edit the file and add your email and location.

    ![Add your information](/project/images/contributor-edit.png)

    You can use any text editor you are comfortable with.

6. Save and close the file.

7. Check the status of your branch.
        $ git status
        On branch dry-run-test
        Untracked files:
          (use "git add <file>..." to include in what will be committed)

            TEST.md

        nothing added to commit but untracked files present (use "git add" to track)

    You've only changed the one file. It is untracked so far by git.

8. Add your file.

        $ git add TEST.md

    That is now the only _staged_ file. Staged is a fancy word for work that Git is
    tracking.

9. Sign and commit your change.

        $ git commit -s -m "Making a dry run test."
        [dry-run-test 6e728fb] Making a dry run test
         1 file changed, 1 insertion(+)
         create mode 100644 TEST.md

    Commit messages should have a short summary sentence of no more than 50
    characters. Optionally, you can also include a more detailed explanation
    after the summary. Separate the summary from any explanation with an empty
    line.

10. Push your changes to GitHub.

        $ git push --set-upstream origin dry-run-test
        Username for 'https://github.com': moxiegirl
        Password for 'https://moxiegirl@github.com':

    Git prompts you for your GitHub username and password. Then, the command
    returns a result.

        Counting objects: 13, done.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
        Total 3 (delta 1), reused 0 (delta 0)
        To https://github.com/moxiegirl/docker.git
         * [new branch]      dry-run-test -> dry-run-test
        Branch dry-run-test set up to track remote branch dry-run-test from origin.

11. Open your browser to GitHub.

12. Navigate to your Docker fork.

13. Make sure the `dry-run-test` branch exists, that it has your commit, and the
commit is signed.

    ![Branch Signature](/project/images/branch-sig.png)

## Where to go next

Congratulations, you have set up and validated the contributor requirements. In
the next section you'll [learn how to set up and work in a Docker development
container](/project/set-up-dev-env/).
\ No newline at end of file diff --git a/project/software-required.md~ b/project/software-required.md~ deleted file mode 100644 index 476cbbc2ca..0000000000 --- a/project/software-required.md~ +++ /dev/null @@ -1,91 +0,0 @@ -page_title: Get the required software -page_description: Describes the software required to contribute to Docker -page_keywords: GitHub account, repository, Docker, Git, Go, make, - -# Get the required software - -Before you begin contributing you must have: - -* a GitHub account -* `git` -* `make` -* `docker` - -You'll notice that `go`, the language that Docker is written in, is not listed. -That's because you don't need it installed; Docker's development environment -provides it for you. You'll learn more about the development environment later. - -### Get a GitHub account - -To contribute to the Docker project, you will need a GitHub account. A free account is -fine. All the Docker project repositories are public and visible to everyone. - -You should also have some experience using both the GitHub application and `git` -on the command line. - -### Install git - -Install `git` on your local system. You can check if `git` is on already on your -system and properly installed with the following command: - - $ git --version - - -This documentation is written using `git` version 2.2.2. Your version may be -different depending on your OS. - -### Install make - -Install `make`. You can check if `make` is on your system with the following -command: - - $ make -v - -This documentation is written using GNU Make 3.81. Your version may be different -depending on your OS. - -### Install or upgrade Docker - -If you haven't already, install the Docker software using the -instructions for your operating system. -If you have an existing installation, check your version and make sure you have -the latest Docker. 
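Since this page checks each prerequisite one at a time, you can also confirm all of them in a single pass; a small shell sketch (the tool list is taken from the requirements above):

```shell
# Report the version of each required tool, or flag it as missing.
for tool in git make docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version | head -n 1)"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

A `NOT FOUND` line means you still need to install that tool before continuing.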
- -To check if `docker` is already installed on Linux: - - $ docker --version - Docker version 1.5.0, build a8a31ef - -On Mac OS X or Windows, you should have installed Boot2Docker which includes -Docker. You'll need to verify both Boot2Docker and then Docker. This -documentation was written on OS X using the following versions. - - $ boot2docker version - Boot2Docker-cli version: v1.5.0 - Git commit: ccd9032 - - $ docker --version - Docker version 1.5.0, build a8a31ef - -## Linux users and sudo - -This guide assumes you have added your user to the `docker` group on your system. -To check, list the group's contents: - - $ getent group docker - docker:x:999:ubuntu - -If the command returns no matches, you have two choices. You can preface this -guide's `docker` commands with `sudo` as you work. Alternatively, you can add -your user to the `docker` group as follows: - - $ sudo usermod -aG docker ubuntu - -You must log out and back in for this modification to take effect. - - -## Where to go next - -In the next section, you'll [learn how to set up and configure Git for -contributing to Docker](/project/set-up-git/). diff --git a/project/test-and-docs.md~ b/project/test-and-docs.md~ deleted file mode 100644 index 32bd8e4007..0000000000 --- a/project/test-and-docs.md~ +++ /dev/null @@ -1,296 +0,0 @@ -page_title: Run tests and test documentation -page_description: Describes Docker's testing infrastructure -page_keywords: make test, make docs, Go tests, gofmt, contributing, running tests - -# Run tests and test documentation - -Contributing includes testing your changes. If you change the Docker code, you -may need to add a new test or modify an existing one. Your contribution could -even be adding tests to Docker. For this reason, you need to know a little -about Docker's test infrastructure. - -Many contributors contribute documentation only. Or, a contributor makes a code -contribution that changes how Docker behaves and that change needs -documentation. 
For these reasons, you also need to know how to build, view, and -test the Docker documentation. - -In this section, you run tests in the `dry-run-test` branch of your Docker -fork. If you have followed along in this guide, you already have this branch. -If you don't have this branch, you can create it or simply use another of your -branches. - -## Understand testing at Docker - -Docker tests use the Go language's test framework. In this framework, files -whose names end in `_test.go` contain test code; you'll find test files like -this throughout the Docker repo. Use these files for inspiration when writing -your own tests. For information on Go's test framework, see the Go testing package -documentation and the `go test` help. - -You are responsible for _unit testing_ your contribution when you add new or -change existing Docker code. A unit test is a piece of code that invokes a -single, small piece of code ( _unit of work_ ) to verify the unit works as -expected. - -Depending on your contribution, you may need to add _integration tests_. These -are tests that combine two or more work units into one component. These work -units each have unit tests and then, together, integration tests that test the -interface between the components. The `integration` and `integration-cli` -directories in the Docker repository contain integration test code. - -Testing is its own speciality. If you aren't familiar with testing techniques, -there is a lot of information available to you on the Web. For now, you should -understand that the Docker maintainers may ask you to write a new test or -change an existing one. - -### Run tests on your local host - -Before submitting any code change, you should run the entire Docker test suite. -The `Makefile` contains a target for the entire test suite; the target's name -is simply `test`. The `Makefile` also contains several more specific targets for testing: 
| Target | What this target does |
|------------------------|-----------------------------------------------------------|
| `test` | Run all the tests. |
| `test-unit` | Run just the unit tests. |
| `test-integration` | Run just the integration tests. |
| `test-integration-cli` | Run the tests for the integration command-line interface. |
| `test-docker-py` | Run the tests for the Docker API client. |
| `docs-test` | Run the documentation test build. |
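If you forget which test targets exist, you can read them straight out of the `Makefile`. This sketch uses a stand-in `Makefile`, since only the target names (not their recipes) matter here:

```shell
# List test-related targets from a Makefile. The file below is a stand-in
# mirroring the target names in the table above.
cat > /tmp/Makefile.demo <<'EOF'
test: build
test-unit: build
test-integration: build
test-integration-cli: build
test-docker-py: build
docs-test: docs-build
EOF
grep -E '^[a-z-]*test[a-z-]*:' /tmp/Makefile.demo | cut -d: -f1
```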
- -Run the entire test suite on your current repository: - -1. Open a terminal on your local host. - -2. Change to the root of your Docker repository. - - $ cd docker-fork - -3. Make sure you are in your development branch. - - $ git checkout dry-run-test - -4. Run the `make test` command. - - $ make test - - This command does several things. It creates a temporary container for - testing. Inside that container, `make`: - - * creates a new binary - * cross-compiles all the binaries for the various operating systems - * runs all the tests in the system - - It can take several minutes to run all the tests. When they complete - successfully, the output concludes with something like this: - - - [PASSED]: top - sleep process should be listed in privileged mode - [PASSED]: version - verify that it works and that the output is properly formatted - PASS - coverage: 70.8% of statements - ---> Making bundle: test-docker-py (in bundles/1.5.0-dev/test-docker-py) - +++ exec docker --daemon --debug --host unix:///go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.sock --storage-driver vfs --exec-driver native --pidfile /go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.pid - ................................................................. - ---------------------------------------------------------------------- - Ran 65 tests in 89.266s - - -### Run test targets inside the development container - -If you are working inside a Docker development container, you use the -`hack/make.sh` script to run tests. The `hack/make.sh` script doesn't -have a single target that runs all the tests. Instead, you provide a single -command line with multiple targets that accomplishes the same thing. - -Try this now. - -1. Open a terminal and change to the `docker-fork` root. - -2. Start a Docker development image. - - If you are following along with this guide, you should have a - `dry-run-test` image. 
- - $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash - -3. Run the tests using the `hack/make.sh` script. - - root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit test-integration test-integration-cli test-docker-py - - The tests run just as they did on your local host. - - -Of course, you can also run a subset of these targets. For example, to run -just the unit tests: - - root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit - -Most test targets require that you build these precursor targets first: -`dynbinary binary cross` - - -## Running individual or multiple named tests - -You can use the `TESTFLAGS` environment variable to run a single test. The -flag's value is passed as arguments to the `go test` command. For example, from -your local host you can run the `TestBuild` test with this command: - - $ TESTFLAGS='-test.run \^TestBuild\$' make test - -To run the same test inside your Docker development container, you do this: - - root@5f8630b873fe:/go/src/github.com/docker/docker# TESTFLAGS='-run ^TestBuild$' hack/make.sh - -## If tests under Boot2Docker fail due to disk space errors - -Running the tests requires about 2GB of memory. If you are running your -container on bare metal, that is, you are not running with Boot2Docker, your -Docker development container is able to take the memory it requires directly -from your local host. - -If you are running Docker using Boot2Docker, the VM uses 2048MB by default. -This means you can exceed your VM's memory when running tests in a Boot2Docker -environment. 
When the test suite runs out of memory, it returns errors similar -to the following: - - server.go:1302 Error: Insertion failed because database is full: database or - disk is full - - utils_test.go:179: Error copy: exit status 1 (cp: writing - '/tmp/docker-testd5c9-[...]': No space left on device - -To increase the memory on your VM, you need to reinitialize the Boot2Docker VM -with new memory settings. - -1. Stop all running containers. - -2. View the current memory setting. - - $ boot2docker info - { - "Name": "boot2docker-vm", - "UUID": "491736fd-4075-4be7-a6f5-1d4cdcf2cc74", - "Iso": "/Users/mary/.boot2docker/boot2docker.iso", - "State": "running", - "CPUs": 8, - "Memory": 2048, - "VRAM": 8, - "CfgFile": "/Users/mary/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vbox", - "BaseFolder": "/Users/mary/VirtualBox VMs/boot2docker-vm", - "OSType": "", - "Flag": 0, - "BootOrder": null, - "DockerPort": 0, - "SSHPort": 2022, - "SerialFile": "/Users/mary/.boot2docker/boot2docker-vm.sock" - } - - -3. Delete your existing `boot2docker` profile. - - $ boot2docker delete - -4. Reinitialize `boot2docker` and specify a higher memory setting. - - $ boot2docker init -m 5555 - -5. Verify the memory was reset. - - $ boot2docker info - -6. Restart your container and try your test again. - - -## Build and test the documentation - -The Docker documentation source files are under `docs/sources`. The content is -written using extended Markdown. We use the static generator MkDocs to build Docker's -documentation. Of course, you don't need to install this generator -to build the documentation; it is included in the container. - -You should always check your documentation for grammar and spelling. The best -way to do this is with an online grammar checker. - -When you change a documentation source file, you should test your change -locally to make sure your content is there and any links work correctly. You -can build the documentation from the local host. 
The build starts a container -and loads the documentation into a server. As long as this container runs, you -can browse the docs. - -1. In a terminal, change to the root of your `docker-fork` repository. - - $ cd ~/repos/dry-run-test - -2. Make sure you are in your feature branch. - - $ git status - On branch dry-run-test - Your branch is up-to-date with 'origin/dry-run-test'. - nothing to commit, working directory clean - -3. Build the documentation. - - $ make docs - - When the build completes, you'll see a final output message similar to the - following: - - Successfully built ee7fe7553123 - docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 "docker-docs:dry-run-test" mkdocs serve - Running at: http://0.0.0.0:8000/ - Live reload enabled. - Hold ctrl+c to quit. - -4. Enter the URL in your browser. - - If you are running Boot2Docker, replace the default localhost address - (0.0.0.0) with your DOCKERHOST value. You can get this value at any time by - entering `boot2docker ip` at the command line. - -5. Once in the documentation, look for the red notice to verify you are seeing the correct build. - - ![Beta documentation](/project/images/red_notice.png) - -6. Navigate to your new or changed document. - -7. Review both the content and the links. - -8. Return to your terminal and exit out of the running documentation container. - - -## Where to go next - -Congratulations, you have successfully completed the basics you need to -understand the Docker test framework. In the next steps, you use what you have -learned so far to [contribute to Docker by working on an -issue](/project/make-a-contribution/). 
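The address substitution in step 4 can be scripted. A sketch; it assumes only that `boot2docker`, when present, prints the VM address via `boot2docker ip`:

```shell
# Pick the address for browsing the docs: the Boot2Docker VM's IP if that
# tool exists, otherwise the 0.0.0.0 address mkdocs binds to.
if command -v boot2docker >/dev/null 2>&1; then
  host=$(boot2docker ip 2>/dev/null)
fi
echo "browse the docs at http://${host:-0.0.0.0}:8000/"
```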
diff --git a/project/who-written-for.md~ b/project/who-written-for.md~ deleted file mode 100644 index e3b761a460..0000000000 --- a/project/who-written-for.md~ +++ /dev/null @@ -1,57 +0,0 @@ -page_title: README first -page_description: Introduction to project contribution at Docker -page_keywords: Gordon, introduction, turtle, machine, libcontainer, how to - -# README first - -This section of the documentation contains a guide for Docker users who want to -contribute code or documentation to the Docker project. As a community, we -share rules of behavior and interaction. Make sure you are familiar with the community guidelines before continuing. - -## Where and what you can contribute - -The Docker project consists of not just one but several repositories on GitHub. -So, in addition to the `docker/docker` repository, there is the -`docker/libcontainer` repo, the `docker/machine` repo, and several more. -Contribute to any of these and you contribute to the Docker project. - -Not all Docker repositories use the Go language. Also, each repository has its -own focus area. So, if you are an experienced contributor, think about -contributing to a Docker repository that has a language or a focus area you are -familiar with. - -If you are new to the open source community, to Docker, or to formal -programming, you should start out contributing to the `docker/docker` -repository. Why? Because this guide is written for that repository specifically. - -Finally, code or documentation isn't the only way to contribute. You can report -an issue, add to discussions in our community channel, write a blog post, or -take a usability test. You can even propose your own type of contribution. -Right now we don't have a lot written about this yet, so just email - if this type of contributing interests you. - -## A turtle is involved - -![Gordon](/project/images/gordon.jpeg) - -Enough said. 
- -## How to use this guide - -This is written for the distracted, the overworked, the sloppy reader with fair -`git` skills and a failing memory for the GitHub GUI. The guide attempts to -explain how to use the Docker environment as precisely, predictably, and -procedurally as possible. - -Users who are new to the Docker development environment should start by setting -up their environment. Then, they should try a simple code change. After that, -they should find something to work on or propose a totally new change. - -If you are a programming prodigy, you still may find this documentation useful. -Please feel free to skim past information you find obvious or boring. - -## How to get started - -Start by [getting the software you need to contribute](/project/software-required/). diff --git a/project/work-issue.md~ b/project/work-issue.md~ deleted file mode 100644 index 561bd231f2..0000000000 --- a/project/work-issue.md~ +++ /dev/null @@ -1,205 +0,0 @@ -page_title: Work on your issue -page_description: Basic workflow for Docker contributions -page_keywords: contribute, pull request, review, workflow, beginner, squash, commit - - -# Work on your issue - -The work you do for your issue depends on the specific issue you picked. -This section gives you a step-by-step workflow. Where appropriate, it provides -command examples. - -However, this is a generalized workflow; depending on your issue, you may repeat -steps or even skip some. How much time the work takes depends on you --- you -could spend days or 30 minutes of your time. - -## How to work on your local branch - -Follow this workflow as you work: - -1. Review the appropriate style guide. - - If you are changing code, review the coding style guide. Changing documentation? Review the - documentation style guide. - -2. Make changes in your feature branch. - - You created your feature branch in the last section. Here you use the - development container. 
If you are making a code change, you can mount your - source into a development container and iterate that way. For documentation - alone, you can work on your local host. - - Make sure you don't change files in the `vendor` directory and its - subdirectories; they contain third-party dependency code. Review the earlier section if you forgot the details of - working with a container. - - -3. Test your changes as you work. - - If you have followed along with the guide, you know the `make test` target - runs the entire test suite and `make docs` builds the documentation. If you - forgot the other test targets, see the documentation for testing both code and - documentation. - -4. For code changes, add unit tests if appropriate. - - If you add new functionality or change existing functionality, you should - also add a unit test. Use the existing test files for inspiration. Aren't - sure if you need tests? Skip this step; you can add them later in the - process if necessary. - -5. Format your source files correctly. 
| File type | How to format |
|---------------------------|---------------|
| `.go` | Format `.go` files using the `gofmt` command. For example, if you edited the `docker.go` file, you would format it with `gofmt -s -w docker.go`. Most file editors have a plugin to format for you; check your editor's documentation. |
| `.md` and non-`.go` files | Wrap lines to 80 characters. |
- -6. List your changes. - - $ git status - On branch 11038-fix-rhel-link - Changes not staged for commit: - (use "git add ..." to update what will be committed) - (use "git checkout -- ..." to discard changes in working directory) - - modified: docs/sources/installation/mac.md - modified: docs/sources/installation/rhel.md - - The `status` command lists what changed in the repository. Make sure you see - the changes you expect. - -7. Add your change to Git. - - $ git add docs/sources/installation/mac.md - $ git add docs/sources/installation/rhel.md - - -8. Commit your changes, making sure to use the `-s` flag to sign your work. - - $ git commit -s -m "Fixing RHEL link" - -9. Push your change to your repository. - - $ git push origin - Username for 'https://github.com': moxiegirl - Password for 'https://moxiegirl@github.com': - Counting objects: 60, done. - Compressing objects: 100% (7/7), done. - Writing objects: 100% (7/7), 582 bytes | 0 bytes/s, done. - Total 7 (delta 6), reused 0 (delta 0) - To https://github.com/moxiegirl/docker.git - * [new branch] 11038-fix-rhel-link -> 11038-fix-rhel-link - Branch 11038-fix-rhel-link set up to track remote branch 11038-fix-rhel-link from origin. - - The first time you push a change, you must specify the branch. Later, you can just do this: - - git push origin - -## Review your branch on GitHub - -After you push a new branch, you should verify it on GitHub: - -1. Open your browser to GitHub. - -2. Go to your Docker fork. - -3. Select your branch from the dropdown. - - ![Find branch](/project/images/locate_branch.png) - -4. Use the "Compare" button to compare the differences between your branch and master. - - Depending on how long you've been working on your branch, your branch may be - behind Docker's upstream repository. - -5. Review the commits. - - Make sure your branch only shows the work you've done. - -## Pull and rebase frequently - -You should pull and rebase frequently as you work. - -1. 
Return to the terminal on your local machine. - -2. Make sure you are in your branch. - - $ git checkout 11038-fix-rhel-link - -3. Fetch all the changes from the `master` branch on the `upstream` remote. - - $ git fetch upstream master - - This command says get all the changes from the `master` branch belonging to - the `upstream` remote. - -4. Rebase your branch onto Docker's `upstream/master` branch. - - $ git rebase -i upstream/master - - This command starts an interactive rebase to merge code from Docker's - `upstream/master` branch into your local branch. If you aren't familiar or - comfortable with rebase, you can learn more about rebasing on the web. - -5. Rebase opens an editor with a list of commits. - - pick 1a79f55 Tweak some of the other text for grammar - pick 53e4983 Fix a link - pick 3ce07bb Add a new line about RHEL - - If you run into trouble, `git rebase --abort` removes any changes and gets - you back to where you started. - -6. Replace the `pick` keyword with `squash` on all but the first commit. - - pick 1a79f55 Tweak some of the other text for grammar - squash 53e4983 Fix a link - squash 3ce07bb Add a new line about RHEL - - After closing the file, `git` opens your editor again to edit the commit - message. - -7. Edit and save your commit message. - - Make sure you include your signature. - -8. Force-push the rebased branch to your fork on GitHub; the `-f` is needed because the rebase rewrote your branch history. - - $ git push -f origin 11038-fix-rhel-link - - -## Where to go next - -At this point, you should understand how to work on an issue. In the next -section, you [learn how to make a pull request](/project/create-pr/). diff --git a/reference/api/README.md~ b/reference/api/README.md~ deleted file mode 100644 index ec1cbcb2c3..0000000000 --- a/reference/api/README.md~ +++ /dev/null @@ -1,9 +0,0 @@ -This directory holds the authoritative specifications of APIs defined and implemented by Docker. 
Currently this includes: - - * The remote API by which a docker node can be queried over HTTP - * The registry API by which a docker node can download and upload - images for storage and sharing - * The index search API by which a docker node can search the public - index for images to download - * The docker.io OAuth and accounts API which 3rd party services can - use to access account information diff --git a/reference/api/docker-io_api.md~ b/reference/api/docker-io_api.md~ deleted file mode 100644 index a7557bacb5..0000000000 --- a/reference/api/docker-io_api.md~ +++ /dev/null @@ -1,505 +0,0 @@ -page_title: Docker Hub API -page_description: API Documentation for the Docker Hub API -page_keywords: API, Docker, index, REST, documentation, Docker Hub, registry - -# Docker Hub API - -- This is the REST API for [Docker Hub](https://hub.docker.com). -- Authorization is done with basic auth over SSL -- Not all commands require authentication, only those noted as such. - -# Repositories - -## User Repository - -### Create a user repository - -`PUT /v1/repositories/(namespace)/(repo_name)/` - -Create a user repository with the given `namespace` and `repo_name`. 
- -**Example Request**: - - PUT /v1/repositories/foo/bar/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - X-Docker-Token: true - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}] - -Parameters: - -- **namespace** – the namespace for the repo -- **repo_name** – the name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write - X-Docker-Token: signature=123abc,repository="foo/bar",access=write - X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io] - - "" - -Status Codes: - -- **200** – Created -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active - -### Delete a user repository - -`DELETE /v1/repositories/(namespace)/(repo_name)/` - -Delete a user repository with the given `namespace` and `repo_name`. 
- -**Example Request**: - - DELETE /v1/repositories/foo/bar/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - X-Docker-Token: true - - "" - -Parameters: - -- **namespace** – the namespace for the repo -- **repo_name** – the name for the repo - -**Example Response**: - - HTTP/1.1 202 - Vary: Accept - Content-Type: application/json - WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete - X-Docker-Token: signature=123abc,repository="foo/bar",access=delete - X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io] - - "" - -Status Codes: - -- **200** – Deleted -- **202** – Accepted -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active - -## Library Repository - -### Create a library repository - -`PUT /v1/repositories/(repo_name)/` - -Create a library repository with the given `repo_name`. -This is a restricted feature only available to docker admins. 
- -> When namespace is missing, it is assumed to be `library` - - -**Example Request**: - - PUT /v1/repositories/foobar/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - X-Docker-Token: true - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}] - -Parameters: - -- **repo_name** – the library name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=write - X-Docker-Token: signature=123abc,repository="foo/bar",access=write - X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io] - - "" - -Status Codes: - -- **200** – Created -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active - -### Delete a library repository - -`DELETE /v1/repositories/(repo_name)/` - -Delete a library repository with the given `repo_name`. -This is a restricted feature only available to docker admins. 
- -> When namespace is missing, it is assumed to be `library` - - -**Example Request**: - - DELETE /v1/repositories/foobar/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - X-Docker-Token: true - - "" - -Parameters: - -- **repo_name** – the library name for the repo - -**Example Response**: - - HTTP/1.1 202 - Vary: Accept - Content-Type: application/json - WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=delete - X-Docker-Token: signature=123abc,repository="foo/bar",access=delete - X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io] - - "" - -Status Codes: - -- **200** – Deleted -- **202** – Accepted -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active - -# Repository Images - -## User Repository Images - -### Update user repository images - -`PUT /v1/repositories/(namespace)/(repo_name)/images` - -Update the images for a user repo. - -**Example Request**: - - PUT /v1/repositories/foo/bar/images HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", - "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}] - -Parameters: - -- **namespace** – the namespace for the repo -- **repo_name** – the name for the repo - -**Example Response**: - - HTTP/1.1 204 - Vary: Accept - Content-Type: application/json - - "" - -Status Codes: - -- **204** – Created -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active or permission denied - -### List user repository images - -`GET /v1/repositories/(namespace)/(repo_name)/images` - -Get the images for a user repo. 
- -**Example Request**: - - GET /v1/repositories/foo/bar/images HTTP/1.1 - Host: index.docker.io - Accept: application/json - -Parameters: - -- **namespace** – the namespace for the repo -- **repo_name** – the name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", - "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}, - {"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds", - "checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}] - -Status Codes: - -- **200** – OK -- **404** – Not found - -## Library Repository Images - -### Update library repository images - -`PUT /v1/repositories/(repo_name)/images` - -Update the images for a library repo. - -**Example Request**: - - PUT /v1/repositories/foobar/images HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", - "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}] - -Parameters: - -- **repo_name** – the library name for the repo - -**Example Response**: - - HTTP/1.1 204 - Vary: Accept - Content-Type: application/json - - "" - -Status Codes: - -- **204** – Created -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active or permission denied - -### List library repository images - -`GET /v1/repositories/(repo_name)/images` - -Get the images for a library repo. 
- -**Example Request**: - - GET /v1/repositories/foobar/images HTTP/1.1 - Host: index.docker.io - Accept: application/json - -Parameters: - -- **repo_name** – the library name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - - [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", - "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}, - {"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds", - "checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}] - -Status Codes: - -- **200** – OK -- **404** – Not found - -# Repository Authorization - -## Library Repository - -### Authorize a token for a library - -`PUT /v1/repositories/(repo_name)/auth` - -Authorize a token for a library repo - -**Example Request**: - - PUT /v1/repositories/foobar/auth HTTP/1.1 - Host: index.docker.io - Accept: application/json - Authorization: Token signature=123abc,repository="library/foobar",access=write - -Parameters: - -- **repo_name** – the library name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - - "OK" - -Status Codes: - -- **200** – OK -- **403** – Permission denied -- **404** – Not found - -## User Repository - -### Authorize a token for a user repository - -`PUT /v1/repositories/(namespace)/(repo_name)/auth` - -Authorize a token for a user repo - -**Example Request**: - - PUT /v1/repositories/foo/bar/auth HTTP/1.1 - Host: index.docker.io - Accept: application/json - Authorization: Token signature=123abc,repository="foo/bar",access=write - -Parameters: - -- **namespace** – the namespace for the repo -- **repo_name** – the name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - - "OK" - -Status Codes: - -- **200** – OK -- **403** – Permission denied -- **404** – Not found - -## Users - -### User Login - -`GET /v1/users/` - -If you want to 
check your login, you can try this endpoint. - -**Example Request**: - - GET /v1/users/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Authorization: Basic akmklmasadalkm== - -**Example Response**: - - HTTP/1.1 200 OK - Vary: Accept - Content-Type: application/json - - OK - -Status Codes: - -- **200** – no error -- **401** – Unauthorized -- **403** – Account is not Active - -### User Register - -`POST /v1/users/` - -Register a new account. - -**Example request**: - - POST /v1/users/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - - {"email": "sam@docker.com", - "password": "toto42", - "username": "foobar"} - -JSON Parameters: - -- **email** – a valid email address that needs to be confirmed -- **username** – min 4 characters, max 30 characters, must match - the regular expression [a-z0-9_]. -- **password** – min 5 characters - -**Example Response**: - - HTTP/1.1 201 Created - Vary: Accept - Content-Type: application/json - - "User Created" - -Status Codes: - -- **201** – User Created -- **400** – Errors (invalid json, missing or invalid fields, etc) - -### Update User - -`PUT /v1/users/(username)/` - -Change a password or email address for a given user. If you pass in an -email, it is added to your account; the old one is not removed. Passwords are updated. - -It is up to the client to verify that the password sent is -the one the user intends. A common approach is to have the user type it -twice. 
- -**Example Request**: - - PUT /v1/users/fakeuser/ HTTP/1.1 - Host: index.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Basic akmklmasadalkm== - - {"email": "sam@docker.com", - "password": "toto42"} - -Parameters: - -- **username** – username for the person you want to update - -**Example Response**: - - HTTP/1.1 204 - Vary: Accept - Content-Type: application/json - - "" - -Status Codes: - -- **204** – User Updated -- **400** – Errors (invalid json, missing or invalid fields, etc) -- **401** – Unauthorized -- **403** – Account is not Active -- **404** – User not found diff --git a/reference/api/docker_io_accounts_api.md~ b/reference/api/docker_io_accounts_api.md~ deleted file mode 100644 index efb86eb33a..0000000000 --- a/reference/api/docker_io_accounts_api.md~ +++ /dev/null @@ -1,270 +0,0 @@ -page_title: docker.io Accounts API -page_description: API Documentation for docker.io accounts. -page_keywords: API, Docker, accounts, REST, documentation - -# docker.io Accounts API - -## Get a single user - -`GET /api/v1.1/users/:username/` - -Get profile info for the specified user. - -Parameters: - -- **username** – username of the user whose profile info is being - requested. - -Request Headers: - -- **Authorization** – required authentication credentials of - either type HTTP Basic or OAuth Bearer Token. - -Status Codes: - -- **200** – success, user data returned. -- **401** – authentication error. -- **403** – permission error, authenticated user must be the user - whose data is being requested, OAuth access tokens must have - `profile_read` scope. -- **404** – the specified username does not exist. 
-
-**Example request**:
-
-    GET /api/v1.1/users/janedoe/ HTTP/1.1
-    Host: www.docker.io
-    Accept: application/json
-    Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "id": 2,
-        "username": "janedoe",
-        "url": "https://www.docker.io/api/v1.1/users/janedoe/",
-        "date_joined": "2014-02-12T17:58:01.431312Z",
-        "type": "User",
-        "full_name": "Jane Doe",
-        "location": "San Francisco, CA",
-        "company": "Success, Inc.",
-        "profile_url": "https://docker.io/",
-        "gravatar_url": "https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80&r=g&d=mm",
-        "email": "jane.doe@example.com",
-        "is_active": true
-    }
-
-## Update a single user
-
-`PATCH /api/v1.1/users/:username/`
-
-Update profile info for the specified user.
-
-Parameters:
-
-- **username** – username of the user whose profile info is being
-  updated.
-
-Json Parameters:
-
-- **full_name** (*string*) – (optional) the new name of the user.
-- **location** (*string*) – (optional) the new location.
-- **company** (*string*) – (optional) the new company of the user.
-- **profile_url** (*string*) – (optional) the new profile URL.
-- **gravatar_email** (*string*) – (optional) the new Gravatar
-  email address.
-
-Request Headers:
-
-- **Authorization** – required authentication credentials of
-  either type HTTP Basic or OAuth Bearer Token.
-- **Content-Type** – MIME Type of post data. JSON, url-encoded
-  form data, etc.
-
-Status Codes:
-
-- **200** – success, user data updated.
-- **400** – post data validation error.
-- **401** – authentication error.
-- **403** – permission error, authenticated user must be the user
-  whose data is being updated, OAuth access tokens must have
-  `profile_write` scope.
-- **404** – the specified username does not exist.
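Since every JSON parameter of this endpoint is optional, a client should send only the fields it wants to change. A sketch of building such a PATCH body (the helper name and field whitelist are illustrative):

```python
import json

# Optional fields documented for PATCH /api/v1.1/users/:username/
ALLOWED = {"full_name", "location", "company", "profile_url", "gravatar_email"}

def patch_body(**changes):
    """Build a PATCH payload containing only known, non-None fields."""
    unknown = set(changes) - ALLOWED
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return json.dumps({k: v for k, v in changes.items() if v is not None})

print(patch_body(location="Private Island", company="Retired"))
```

Fields left out of the body are left untouched on the server.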
-
-**Example request**:
-
-    PATCH /api/v1.1/users/janedoe/ HTTP/1.1
-    Host: www.docker.io
-    Accept: application/json
-    Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
-
-    {
-        "location": "Private Island",
-        "profile_url": "http://janedoe.com/",
-        "company": "Retired"
-    }
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "id": 2,
-        "username": "janedoe",
-        "url": "https://www.docker.io/api/v1.1/users/janedoe/",
-        "date_joined": "2014-02-12T17:58:01.431312Z",
-        "type": "User",
-        "full_name": "Jane Doe",
-        "location": "Private Island",
-        "company": "Retired",
-        "profile_url": "http://janedoe.com/",
-        "gravatar_url": "https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80&r=g&d=mm",
-        "email": "jane.doe@example.com",
-        "is_active": true
-    }
-
-## List email addresses for a user
-
-`GET /api/v1.1/users/:username/emails/`
-
-List email info for the specified user.
-
-Parameters:
-
-- **username** – username of the user whose email info is being
-  requested.
-
-Request Headers:
-
-- **Authorization** – required authentication credentials of
-  either type HTTP Basic or OAuth Bearer Token.
-
-Status Codes:
-
-- **200** – success, email list returned.
-- **401** – authentication error.
-- **403** – permission error, authenticated user must be the user
-  whose data is being requested, OAuth access tokens must have
-  `email_read` scope.
-- **404** – the specified username does not exist.
-
-**Example request**:
-
-    GET /api/v1.1/users/janedoe/emails/ HTTP/1.1
-    Host: www.docker.io
-    Accept: application/json
-    Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    [
-        {
-            "email": "jane.doe@example.com",
-            "verified": true,
-            "primary": true
-        }
-    ]
-
-## Add email address for a user
-
-`POST /api/v1.1/users/:username/emails/`
-
-Add a new email address to the specified user's account. The email
-address must be verified separately; a confirmation email is not
-automatically sent.
-
-Json Parameters:
-
-- **email** (*string*) – email address to be added.
-
-Request Headers:
-
-- **Authorization** – required authentication credentials of
-  either type HTTP Basic or OAuth Bearer Token.
-- **Content-Type** – MIME Type of post data. JSON, url-encoded
-  form data, etc.
-
-Status Codes:
-
-- **201** – success, new email added.
-- **400** – data validation error.
-- **401** – authentication error.
-- **403** – permission error, authenticated user must be the user
-  whose data is being requested, OAuth access tokens must have
-  `email_write` scope.
-- **404** – the specified username does not exist.
-
-**Example request**:
-
-    POST /api/v1.1/users/janedoe/emails/ HTTP/1.1
-    Host: www.docker.io
-    Accept: application/json
-    Content-Type: application/json
-    Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
-
-    {
-        "email": "jane.doe+other@example.com"
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {
-        "email": "jane.doe+other@example.com",
-        "verified": false,
-        "primary": false
-    }
-
-## Delete email address for a user
-
-`DELETE /api/v1.1/users/:username/emails/`
-
-Delete an email address from the specified user's account. You
-cannot delete a user's primary email address.
-
-Json Parameters:
-
-- **email** (*string*) – email address to be deleted.
-
-Request Headers:
-
-- **Authorization** – required authentication credentials of
-  either type HTTP Basic or OAuth Bearer Token.
-- **Content-Type** – MIME Type of post data. JSON, url-encoded
-  form data, etc.
-
-Status Codes:
-
-- **204** – success, email address removed.
-- **400** – validation error.
-- **401** – authentication error.
-- **403** – permission error, authenticated user must be the user
-  whose data is being requested, OAuth access tokens must have
-  `email_write` scope.
-- **404** – the specified username or email address does not - exist. - -**Example request**: - - DELETE /api/v1.1/users/janedoe/emails/ HTTP/1.1 - Host: www.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM - - { - "email": "jane.doe+other@example.com" - } - -**Example response**: - - HTTP/1.1 204 NO CONTENT - Content-Length: 0 diff --git a/reference/api/docker_remote_api.md~ b/reference/api/docker_remote_api.md~ deleted file mode 100644 index 30448b040d..0000000000 --- a/reference/api/docker_remote_api.md~ +++ /dev/null @@ -1,572 +0,0 @@ -page_title: Remote API -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API - - - By default the Docker daemon listens on `unix:///var/run/docker.sock` - and the client must have `root` access to interact with the daemon. - - If the Docker daemon is set to use an encrypted TCP socket (`--tls`, - or `--tlsverify`) as with Boot2Docker 1.3.0, then you need to add extra - parameters to `curl` or `wget` when making test API requests: - `curl --insecure --cert ~/.docker/cert.pem --key ~/.docker/key.pem https://boot2docker:2376/images/json` - or - `wget --no-check-certificate --certificate=$DOCKER_CERT_PATH/cert.pem --private-key=$DOCKER_CERT_PATH/key.pem https://boot2docker:2376/images/json -O - -q` - - If a group named `docker` exists on your system, docker will apply - ownership of the socket to the group. - - The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport STDOUT, STDIN, - and STDERR. - - Since API version 1.2, the auth configuration is now handled client - side, so the client has to send the `authConfig` as a `POST` in `/images/(name)/push`. 
-
- - authConfig, set as the `X-Registry-Auth` header, is currently a Base64
-   encoded (JSON) string with the following structure:
-   `{"username": "string", "password": "string", "email": "string",
-   "serveraddress" : "string", "auth": ""}`. Notice that `auth` is to be left
-   empty, `serveraddress` is a domain/ip without protocol, and that double
-   quotes (instead of single ones) are required.
- - The Remote API uses an open schema model. In this model, unknown
-   properties in incoming messages will be ignored.
-   Client applications need to take this into account to ensure
-   they will not break when talking to newer Docker daemons.
-
-The current version of the API is v1.17.
-
-Calling `/info` is the same as calling
-`/v1.17/info`.
-
-You can still call an old version of the API using
-`/v1.16/info`.
-
-## v1.17
-
-### Full Documentation
-
-[*Docker Remote API v1.17*](/reference/api/docker_remote_api_v1.17/)
-
-### What's new
-
-`POST /containers/(id)/attach` and `POST /exec/(id)/start`
-
-**New!**
-The Docker client now hints potential proxies about connection hijacking using HTTP Upgrade headers.
-
-`GET /containers/(id)/json`
-
-**New!**
-This endpoint now returns the list of current execs associated with the container (`ExecIDs`).
-
-`POST /containers/(id)/rename`
-
-**New!**
-New endpoint to rename a container `id` to a new name.
-
-`POST /containers/create`
-`POST /containers/(id)/start`
-
-**New!**
-(`ReadonlyRootfs`) can be passed in the host config to mount the container's
-root filesystem as read only.
-
-`GET /containers/(id)/stats`
-
-**New!**
-This endpoint returns a live stream of a container's resource usage statistics.
-
-> **Note**: this functionality currently only works when using the *libcontainer* exec-driver.
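The `X-Registry-Auth` header described in the introduction above can be produced as a sketch, assuming the documented structure (empty `auth`, `serveraddress` as a domain/ip without protocol):

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Build the Base64-encoded JSON value for X-Registry-Auth."""
    config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,  # domain/ip without protocol
        "auth": "",  # documented as left empty
    }
    # json.dumps emits the required double quotes
    return base64.b64encode(json.dumps(config).encode("utf-8")).decode("ascii")

print(registry_auth_header("janedoe", "secret", "jane@example.com", "index.docker.io"))
```

The resulting string is sent as the `X-Registry-Auth` header on `POST /images/(name)/push`.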
-
-
-## v1.16
-
-### Full Documentation
-
-[*Docker Remote API v1.16*](/reference/api/docker_remote_api_v1.16/)
-
-### What's new
-
-`GET /info`
-
-**New!**
-`info` now returns the number of CPUs available on the machine (`NCPU`),
-total memory available (`MemTotal`), a user-friendly name describing the running Docker daemon (`Name`), a unique ID identifying the daemon (`ID`), and
-a list of daemon labels (`Labels`).
-
-`POST /containers/create`
-
-**New!**
-You can set the new container's MAC address explicitly.
-
-**New!**
-Volumes are now initialized when the container is created.
-
-`POST /containers/(id)/copy`
-
-**New!**
-You can now copy data which is contained in a volume.
-
-## v1.15
-
-### Full Documentation
-
-[*Docker Remote API v1.15*](/reference/api/docker_remote_api_v1.15/)
-
-### What's new
-
-`POST /containers/create`
-
-**New!**
-It is now possible to set a container's HostConfig when creating a container.
-Previously this was only available when starting a container.
-
-## v1.14
-
-### Full Documentation
-
-[*Docker Remote API v1.14*](/reference/api/docker_remote_api_v1.14/)
-
-### What's new
-
-`DELETE /containers/(id)`
-
-**New!**
-When using `force`, the container will be immediately killed with SIGKILL.
-
-`POST /containers/(id)/start`
-
-**New!**
-The `hostConfig` option now accepts the field `CapAdd`, which specifies a list of capabilities
-to add, and the field `CapDrop`, which specifies a list of capabilities to drop.
-
-`POST /images/create`
-
-**New!**
-The `fromImage` and `repo` parameters now support the `repo:tag` format.
-Consequently, the `tag` parameter is now obsolete. Using the new format and
-the `tag` parameter at the same time will return an error.
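The v1.14 rule above can be mirrored client side; a sketch (the helper is hypothetical, the `latest` fallback is an assumption, and registry hosts with ports are not handled):

```python
def resolve_image_ref(from_image, tag=None):
    """Split repo[:tag]; combining the new format with `tag` is an error."""
    if ":" in from_image:
        repo, ref_tag = from_image.rsplit(":", 1)
        if tag is not None:
            raise ValueError("cannot combine repo:tag format with a tag parameter")
        return repo, ref_tag
    # assumed conventional default when no tag is given
    return from_image, tag or "latest"

print(resolve_image_ref("ubuntu:12.04"))  # ('ubuntu', '12.04')
```

Passing both `fromImage=ubuntu:12.04` and `tag=precise` would be rejected, matching the documented error.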
-
-## v1.13
-
-### Full Documentation
-
-[*Docker Remote API v1.13*](/reference/api/docker_remote_api_v1.13/)
-
-### What's new
-
-`GET /containers/(name)/json`
-
-**New!**
-The `HostConfig.Links` field is now filled correctly.
-
-**New!**
-`Sockets` parameter added to the `/info` endpoint listing all the sockets the
-daemon is configured to listen on.
-
-`POST /containers/(name)/start`
-`POST /containers/(name)/stop`
-
-**New!**
-`start` and `stop` will now return 304 if the container's status is not modified.
-
-`POST /commit`
-
-**New!**
-Added a `pause` parameter (default `true`) to pause the container during commit.
-
-## v1.12
-
-### Full Documentation
-
-[*Docker Remote API v1.12*](/reference/api/docker_remote_api_v1.12/)
-
-### What's new
-
-`POST /build`
-
-**New!**
-Build now supports the `forcerm` parameter to always remove containers.
-
-`GET /containers/(name)/json`
-`GET /images/(name)/json`
-
-**New!**
-All the JSON keys are now in CamelCase.
-
-**New!**
-Trusted builds are now Automated Builds - `is_trusted` is now `is_automated`.
-
-**Removed Insert Endpoint**
-The `insert` endpoint has been removed.
-
-## v1.11
-
-### Full Documentation
-
-[*Docker Remote API v1.11*](/reference/api/docker_remote_api_v1.11/)
-
-### What's new
-
-`GET /_ping`
-
-**New!**
-You can now ping the server via the `_ping` endpoint.
-
-`GET /events`
-
-**New!**
-You can now use the `-until` parameter to close the connection
-after a given timestamp.
-
-`GET /containers/(id)/logs`
-
-This URL is now the preferred method for getting container logs.
-
-## v1.10
-
-### Full Documentation
-
-[*Docker Remote API v1.10*](/reference/api/docker_remote_api_v1.10/)
-
-### What's new
-
-`DELETE /images/(name)`
-
-**New!**
-You can now use the force parameter to force the deletion of an
-image, even if it's tagged in multiple repositories.
**New!** - You - can now use the noprune parameter to prevent the deletion of parent - images - -`DELETE /containers/(id)` - -**New!** -You can now use the force parameter to force delete a - container, even if it is currently running - -## v1.9 - -### Full Documentation - -[*Docker Remote API v1.9*](/reference/api/docker_remote_api_v1.9/) - -### What's new - -`POST /build` - -**New!** -This endpoint now takes a serialized ConfigFile which it -uses to resolve the proper registry auth credentials for pulling the -base image. Clients which previously implemented the version -accepting an AuthConfig object must be updated. - -## v1.8 - -### Full Documentation - -[*Docker Remote API v1.8*](/reference/api/docker_remote_api_v1.8/) - -### What's new - -`POST /build` - -**New!** -This endpoint now returns build status as json stream. In -case of a build error, it returns the exit status of the failed -command. - -`GET /containers/(id)/json` - -**New!** -This endpoint now returns the host config for the -container. - -`POST /images/create` - -`POST /images/(name)/insert` - -`POST /images/(name)/push` - -**New!** -progressDetail object was added in the JSON. It's now -possible to get the current value and the total of the progress -without having to parse the string. - -## v1.7 - -### Full Documentation - -[*Docker Remote API v1.7*](/reference/api/docker_remote_api_v1.7/) - -### What's new - -`GET /images/json` - -The format of the json returned from this uri changed. Instead of an -entry for each repo/tag on an image, each image is only represented -once, with a nested attribute indicating the repo/tags that apply to -that image. 
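With `progressDetail`, progress can be computed numerically instead of parsed from a string. A sketch, assuming the stream's `current`/`total` field names:

```python
def percent_done(progress_detail):
    """Return completion as a percentage, or None if the total is unknown."""
    current = progress_detail.get("current", 0)
    total = progress_detail.get("total", 0)
    if not total:
        # some layers report no total (e.g. unknown size)
        return None
    return round(100.0 * current / total, 1)

print(percent_done({"current": 512, "total": 2048}))  # 25.0
```

A client rendering a progress bar would call this on each `progressDetail` object it receives from the pull/push stream.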
- -Instead of: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "VirtualSize": 131506275, - "Size": 131506275, - "Created": 1365714795, - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Tag": "12.04", - "Repository": "ubuntu" - }, - { - "VirtualSize": 131506275, - "Size": 131506275, - "Created": 1365714795, - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Tag": "latest", - "Repository": "ubuntu" - }, - { - "VirtualSize": 131506275, - "Size": 131506275, - "Created": 1365714795, - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Tag": "precise", - "Repository": "ubuntu" - }, - { - "VirtualSize": 180116135, - "Size": 24653, - "Created": 1364102658, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Tag": "12.10", - "Repository": "ubuntu" - }, - { - "VirtualSize": 180116135, - "Size": 24653, - "Created": 1364102658, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Tag": "quantal", - "Repository": "ubuntu" - } - ] - -The returned json looks like this: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -`GET /images/viz` - -This URI no longer exists. The `images --viz` -output is now generated in the client, using the -`/images/json` data. 
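The v1.7 change above amounts to grouping the old per-tag entries by image `Id`. A client-side sketch of that transformation:

```python
def group_by_image(old_entries):
    """Collapse one-entry-per-tag listings into one entry per image Id."""
    images = {}
    for entry in old_entries:
        img = images.setdefault(entry["Id"], {
            "RepoTags": [],
            "Id": entry["Id"],
            "Created": entry["Created"],
            "Size": entry["Size"],
            "VirtualSize": entry["VirtualSize"],
        })
        img["RepoTags"].append(f'{entry["Repository"]}:{entry["Tag"]}')
    return list(images.values())

# abbreviated entries in the old (pre-v1.7) shape
old = [
    {"Repository": "ubuntu", "Tag": "12.04", "Id": "8dbd9e392a96",
     "Created": 1365714795, "Size": 131506275, "VirtualSize": 131506275},
    {"Repository": "ubuntu", "Tag": "latest", "Id": "8dbd9e392a96",
     "Created": 1365714795, "Size": 131506275, "VirtualSize": 131506275},
]
print(group_by_image(old))
```

Two old entries that share an `Id` become a single entry whose `RepoTags` lists both `repo:tag` pairs, as in the response shown above.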
- -## v1.6 - -### Full Documentation - -[*Docker Remote API v1.6*](/reference/api/docker_remote_api_v1.6/) - -### What's new - -`POST /containers/(id)/attach` - -**New!** -You can now split stderr from stdout. This is done by -prefixing a header to each transmission. See -[`POST /containers/(id)/attach`]( -/reference/api/docker_remote_api_v1.9/#attach-to-a-container "POST /containers/(id)/attach"). -The WebSocket attach is unchanged. Note that attach calls on the -previous API version didn't change. Stdout and stderr are merged. - -## v1.5 - -### Full Documentation - -[*Docker Remote API v1.5*](/reference/api/docker_remote_api_v1.5/) - -### What's new - -`POST /images/create` - -**New!** -You can now pass registry credentials (via an AuthConfig - object) through the X-Registry-Auth header - -`POST /images/(name)/push` - -**New!** -The AuthConfig object now needs to be passed through the - X-Registry-Auth header - -`GET /containers/json` - -**New!** -The format of the Ports entry has been changed to a list of -dicts each containing PublicPort, PrivatePort and Type describing a -port mapping. - -## v1.4 - -### Full Documentation - -[*Docker Remote API v1.4*](/reference/api/docker_remote_api_v1.4/) - -### What's new - -`POST /images/create` - -**New!** -When pulling a repo, all images are now downloaded in parallel. - -`GET /containers/(id)/top` - -**New!** -You can now use ps args with docker top, like docker top - aux - -`GET /events` - -**New!** -Image's name added in the events - -## v1.3 - -docker v0.5.0 -[51f6c4a](https://github.com/docker/docker/commit/51f6c4a7372450d164c61e0054daf0223ddbd909) - -### Full Documentation - -[*Docker Remote API v1.3*](/reference/api/docker_remote_api_v1.3/) - -### What's new - -`GET /containers/(id)/top` - -List the processes running inside a container. 
-
-`GET /events`
-
-**New!**
-Monitor docker's events via streaming or via polling.
-
-Builder (/build):
-
- - Simplify the upload of the build context
- - Simply stream a tarball instead of multipart upload with 4
-   intermediary buffers
- - Simpler, less memory usage, less disk usage and faster
-
-> **Warning**:
-> The /build improvements are not reverse-compatible. Pre 1.3 clients will
-> break on /build.
-
-List containers (/containers/json):
-
- - You can use size=1 to get the size of the containers
-
-Start containers (/containers/(id)/start):
-
- - You can now pass host-specific configuration (e.g., bind mounts) in
-   the POST body for start calls
-
-## v1.2
-
-docker v0.4.2
-[2e7649b](https://github.com/docker/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168)
-
-### Full Documentation
-
-[*Docker Remote API v1.2*](/reference/api/docker_remote_api_v1.2/)
-
-### What's new
-
-The auth configuration is now handled by the client.
-
-The client should send its authConfig as POST on each call of
-`/images/(name)/push`
-
-`GET /auth`
-
-**Deprecated.**
-
-`POST /auth`
-
-Only checks the configuration but doesn't store it on the server.
-
- - Deleting an image is now improved: it will only untag the image if it
-   has children, and remove all untagged parents if it has any.
-
-`POST /images/(name)/delete`
-
-Now returns a JSON structure with the list of images
-deleted/untagged.
-
-## v1.1
-
-docker v0.4.0
-[a8ae398](https://github.com/docker/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f)
-
-### Full Documentation
-
-[*Docker Remote API v1.1*](/reference/api/docker_remote_api_v1.1/)
-
-### What's new
-
-`POST /images/create`
-
-`POST /images/(name)/insert`
-
-`POST /images/(name)/push`
-
-Uses a JSON stream instead of an HTML hijack; it looks like this:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {"status":"Pushing..."}
-    {"status":"Pushing", "progress":"1/? (n/a)"}
-    {"error":"Invalid..."}
-    ...
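Each line of the stream above is an independent JSON object; a minimal client-side reader, with the `error` handling mirroring the error entry in the example:

```python
import json

def read_json_stream(lines):
    """Decode a newline-delimited JSON stream, raising on an error entry."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        if "error" in msg:
            raise RuntimeError(msg["error"])
        yield msg

stream = ['{"status":"Pushing..."}', '{"status":"Pushing", "progress":"1/? (n/a)"}']
for msg in read_json_stream(stream):
    print(msg["status"])
```

In practice the lines would come from the hijacked HTTP response body rather than an in-memory list.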
- -## v1.0 - -docker v0.3.4 -[8d73740](https://github.com/docker/docker/commit/8d73740343778651c09160cde9661f5f387b36f4) - -### Full Documentation - -[*Docker Remote API v1.0*](/reference/api/docker_remote_api_v1.0/) - -### What's new - -Initial version diff --git a/reference/api/docker_remote_api_v1.0.md~ b/reference/api/docker_remote_api_v1.0.md~ deleted file mode 100644 index 399bf7f141..0000000000 --- a/reference/api/docker_remote_api_v1.0.md~ +++ /dev/null @@ -1,986 +0,0 @@ -page_title: Remote API v1.0 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.0 - -# 1. Brief introduction - -- The Remote API is replacing rcli -- Default port in the docker daemon is 2375 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0" - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0" - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0" - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0" - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default -- **limit** – Show `limit` last created - containers, include non-running ones. 
-- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": 
"2013-05-07T14:51:42.087658+02:00",
-            "Ghost": false
-        },
-        "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
-        "NetworkSettings": {
-            "IpAddress": "",
-            "IpPrefixLen": 0,
-            "Gateway": "",
-            "Bridge": "",
-            "PortMapping": null
-        },
-        "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker",
-        "ResolvConfPath": "/etc/resolv.conf",
-        "Volumes": {}
-    }
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Inspect changes on a container's filesystem
-
-`GET /containers/(id)/changes`
-
-Inspect changes on container `id`'s filesystem
-
-**Example request**:
-
-    GET /containers/4fa6e0f0c678/changes HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    [
-        {
-            "Path": "/dev",
-            "Kind": 0
-        },
-        {
-            "Path": "/dev/kmsg",
-            "Kind": 1
-        },
-        {
-            "Path": "/test",
-            "Kind": 1
-        }
-    ]
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Export a container
-
-`GET /containers/(id)/export`
-
-Export the contents of container `id`
-
-**Example request**:
-
-    GET /containers/4fa6e0f0c678/export HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/octet-stream
-
-    {{ TAR STREAM }}
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Start a container
-
-`POST /containers/(id)/start`
-
-Start the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/start HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Stop a container
-
-`POST /containers/(id)/stop`
-
-Stop the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/stop?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Restart a container
-
-`POST /containers/(id)/restart`
-
-Restart the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/restart?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Kill a container
-
-`POST /containers/(id)/kill`
-
-Kill the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/kill HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Attach to a container
-
-`POST /containers/(id)/attach`
-
-Attach to the container `id`
-
-**Example request**:
-
-    POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default
-  false
-- **stream** – 1/True/true or 0/False/false, return stream.
-  Default false
-- **stdin** – 1/True/true or 0/False/false, if stream=true, attach
-  to stdin. Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log, if stream=true, attach to stdout. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" [label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. 
- Only running containers are shown by defaul - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pull it from the registry or by importing i - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - - {{ TAR STREAM }} - -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - 
"VolumesFrom":"" - } - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image to the registry - -`POST /images/(name)/push` - -Push the image `name` to the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** – The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such image -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET /images/search?term=sshd
HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - -Query Parameters: - -- **term** – term to search for - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ STREAM }} - -Query Parameters: - -- **t** – repository name to be applied to the resulting image in - case of success - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get default username and email - -`GET /auth` - -Get the default username and email - -**Example request**: - - GET /auth HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "username":"hannibal", - "email":"hannibal@a-team.com" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration and store it - -`POST /auth` - -Check the auth configuration and store it - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username":"hannibal", - "password":"xxxx", - "email":"hannibal@a-team.com" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the
docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "PortSpecs":["22"] - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this first version of the API, some of the endpoints, like /attach, -/pull or /push, use hijacking to transport stdin, stdout and stderr on -the same socket. This might change in the future.
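The `docker run` call sequence described in section 3.1 can be modeled as a small sketch. This example is not part of the original API documentation: the image name `ubuntu` and the `<id>` placeholder are illustrative assumptions, and the function only models the order of requests, not the HTTP transport itself.

```python
# Illustrative sketch of the requests behind `docker run` (not from the
# original docs; "ubuntu" and "<id>" are placeholder assumptions).

def docker_run_calls(image_exists, detached=False):
    """Return the ordered (method, path) requests `docker run` would issue."""
    calls = [("POST", "/containers/create")]
    if not image_exists:
        # A 404 from create means the image is missing: pull it, then
        # retry the create.
        calls.append(("POST", "/images/create?fromImage=ubuntu"))
        calls.append(("POST", "/containers/create"))
    calls.append(("POST", "/containers/<id>/start"))
    if not detached:
        # logs=1 replays output from the container's start; stream=1 keeps
        # the (hijacked) connection open for new output.
        calls.append(("POST", "/containers/<id>/attach?logs=1&stream=1"))
    return calls
```

In detached mode the sequence simply stops after the start request, which matches the branch in the list above.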
diff --git a/reference/api/docker_remote_api_v1.1.md~ b/reference/api/docker_remote_api_v1.1.md~ deleted file mode 100644 index 7ddb4ee0e6..0000000000 --- a/reference/api/docker_remote_api_v1.1.md~ +++ /dev/null @@ -1,998 +0,0 @@ -page_title: Remote API v1.1 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.1 - -# 1. Brief introduction - -- The Remote API is replacing rcli -- Default port in the docker daemon is 2375 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0" - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0" - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0" - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0" - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default -- **limit** – Show `limit` last created - containers, include non-running ones. -- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. 
- -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - 
"IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/e90e34656806/start HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example 
request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" [label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. 
- Only running containers are shown by default - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ...
- -Query Parameters: - -- **url** – The URL from which the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - "VolumesFrom":"" - } - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image to the registry - -`POST /images/(name)/push` - -Push the image `name` to the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} - {"status":"Pushing", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ...
- -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** – The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such image -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - -Query Parameters: - -- **term** – term to search for - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ STREAM }} - -Query Parameters: - -- **t** – tag to be applied to the resulting image in case of - success - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get default username and email - -`GET /auth` - -Get the default username and email - -**Example request**: - - GET /auth
HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "username":"hannibal", - "email":"hannibal@a-team.com" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration and store it - -`POST /auth` - -Check the auth configuration and store it - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username":"hannibal", - "password":"xxxx", - "email":"hannibal@a-team.com" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "PortSpecs":["22"] - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit
message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -# 3. Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run` : - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's - -## 3.2 Hijacking - -In this version of the API, /attach uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. diff --git a/reference/api/docker_remote_api_v1.10.md~ b/reference/api/docker_remote_api_v1.10.md~ deleted file mode 100644 index 7837b82edd..0000000000 --- a/reference/api/docker_remote_api_v1.10.md~ +++ /dev/null @@ -1,1347 +0,0 @@ -page_title: Remote API v1.10 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.10 - -## 1. Brief introduction - - - The Remote API has replaced rcli - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` - and `stderr` - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Image":"ubuntu", - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "NetworkDisabled": false, - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Query Parameters: - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`.
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Image": "ubuntu", - "Volumes": {}, - "WorkingDir":"" - - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": null, - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - 
Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -   - -- **ps\_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id` 's filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false, - "Dns": ["8.8.8.8"], - "VolumesFrom": ["parent", "other:ro"] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – 
the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit. - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. 
Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` -](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains information about which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame encoded on the last 4 bytes (uint32). - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. 
Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns - the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** – 1/True/true or 0/False/false, Removes the container - even if it was running. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing - it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. 
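
The pull/import response shown above is a stream of newline-separated JSON objects, one per progress update. A client can consume it line by line roughly as follows (an illustrative sketch — `handle_progress_line` is a name of my choosing, not part of the API):

```python
import json

def handle_progress_line(line):
    """Parse one line of the /images/create response stream.

    Returns the decoded object, or raises RuntimeError if the daemon
    reported an error for this pull/import.
    """
    msg = json.loads(line)
    if "error" in msg:
        raise RuntimeError(msg["error"])
    detail = msg.get("progressDetail") or {}
    if "current" in detail and "total" in detail:
        pct = 100.0 * detail["current"] / detail["total"]
        print("%s: %.0f%%" % (msg.get("status", ""), pct))
    return msg

# Sample lines in the shape shown in the example response above:
stream = [
    '{"status": "Pulling..."}',
    '{"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}}',
]
for line in stream:
    handle_progress_line(line)
```

Note that a single response can interleave `status` and `error` objects, so a robust client should keep reading until the connection closes rather than stopping at the first decoded line.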
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image - `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}} - {"error":"Invalid..."} - ... - -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Image":"ubuntu", - "Volumes":null, - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the 
image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - If you wish to push an image onto a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig object. 
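
As a sketch of how a client might build the `X-Registry-Auth` header value: serialize an AuthConfig object (the same field names used by `POST /auth`) to JSON and base64-encode it. The helper name and the use of plain standard base64 are my assumptions for illustration:

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Serialize an AuthConfig object and base64-encode it for X-Registry-Auth."""
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    return base64.b64encode(json.dumps(auth_config).encode("utf-8")).decode("ascii")

header = registry_auth_header("hannibal", "xxxx", "hannibal@a-team.com",
                              "https://index.docker.io/v1/")
# The daemon decodes the header back into the original JSON object:
decoded = json.loads(base64.b64decode(header))
```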
- -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... 
- ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -### 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#add)). - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** - remove intermediate containers after a successful build - - Request Headers: - -- **Content-type** – should be set to `"application/tar"`. 
-- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "NetworkDisabled": false, - 
"ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - - -Json Parameters: - - - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository - specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. 
- -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run` : - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.11.md~ b/reference/api/docker_remote_api_v1.11.md~ deleted file mode 100644 index 6bcabfc791..0000000000 --- a/reference/api/docker_remote_api_v1.11.md~ +++ /dev/null @@ -1,1378 +0,0 @@ -page_title: Remote API v1.11 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.11 - -## 1. Brief introduction - - - The Remote API has replaced `rcli`. - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, `STDIN` - and `STDERR`. - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Image":"ubuntu", - "Volumes":{ - "/tmp": {} - }, - "VolumesFrom":"", - "WorkingDir":"", - "DisableNetwork": false, - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Query Parameters: - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`. 
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": null, - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example 
response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Get container logs - -`GET /containers/(id)/logs` - -Get stdout and stderr logs from the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **follow** – 1/True/true or 0/False/false, return stream. - Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log. Default false -- **timestamps** – 1/True/true or 0/False/false, if logs=true, print - timestamps for every log line. 
Default false - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false, - "Dns": ["8.8.8.8"], - "VolumesFrom": ["parent", "other:ro"] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before 
killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit. - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains information about which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame encoded on the last 4 bytes (uint32). - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. 
- Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** – 1/True/true or 0/False/false, Removes the container - even if it was running. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -### Create an image - -`POST /images/create` - -Create an image, either by pull it from the registry or by importing i - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. 
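The `X-Registry-Auth` value is the base64-encoded JSON serialization of an AuthConfig object. A minimal sketch of building it, assuming the field names shown in the `/auth` example elsewhere in this document (the credential values are placeholders):

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    # AuthConfig fields as shown in the POST /auth example; values here
    # are placeholders, not real credentials.
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    # The header value is the JSON-encoded AuthConfig, base64-encoded.
    return base64.b64encode(json.dumps(auth_config).encode("utf-8")).decode("ascii")

header = registry_auth_header(
    "hannibal", "xxxx", "hannibal@a-team.com", "https://index.docker.io/v1/"
)
```

The resulting string is sent verbatim as the `X-Registry-Auth` request header alongside `POST /images/create`.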
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"ubuntu", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push 
HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}}} - {"error": "Invalid..."} - ... - - If you wish to push an image on to a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search 
images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
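Since the `POST /build` body is just a tar archive with a `Dockerfile` at its root, the build context can be assembled in memory. A minimal sketch (the Dockerfile contents are placeholders):

```python
import io
import tarfile

# Placeholder Dockerfile; any instructions accepted by the builder work here.
dockerfile = b'FROM ubuntu\nCMD ["date"]\n'

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    # The Dockerfile must sit at the root of the archive.
    info = tarfile.TarInfo(name="Dockerfile")
    info.size = len(dockerfile)
    tar.addfile(info, io.BytesIO(dockerfile))

# Send this as the request body with Content-Type: application/tar.
context = buf.getvalue()
```

Additional files added to the archive become available in the build context, as described above.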
-
-Query Parameters:
-
-- **t** – repository name (and optionally a tag) to be applied to
-  the resulting image in case of success
-- **remote** – git or HTTP/HTTPS URI build source
-- **q** – suppress verbose build output
-- **nocache** – do not use the cache when building the image
-- **rm** – remove intermediate containers after a successful build
-
-Request Headers:
-
-- **Content-type** – should be set to `"application/tar"`.
-- **X-Registry-Config** – base64-encoded ConfigFile object
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Check auth configuration
-
-`POST /auth`
-
-Get the default username and email
-
-**Example request**:
-
-    POST /auth HTTP/1.1
-    Content-Type: application/json
-
-    {
-         "username": "hannibal",
-         "password": "xxxx",
-         "email": "hannibal@a-team.com",
-         "serveraddress": "https://index.docker.io/v1/"
-    }
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-
-Status Codes:
-
-- **200** – no error
-- **204** – no error
-- **500** – server error
-
-### Display system-wide information
-
-`GET /info`
-
-Display system-wide information
-
-**Example request**:
-
-    GET /info HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "Containers": 11,
-        "Images": 16,
-        "Driver": "btrfs",
-        "ExecutionDriver": "native-0.1",
-        "KernelVersion": "3.12.0-1-amd64",
-        "Debug": false,
-        "NFd": 11,
-        "NGoroutines": 21,
-        "NEventsListener": 0,
-        "InitPath": "/usr/bin/docker",
-        "IndexServerAddress": ["https://index.docker.io/v1/"],
-        "MemoryLimit": true,
-        "SwapLimit": false,
-        "IPv4Forwarding": true
-    }
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Show the docker version information
-
-`GET /version`
-
-Show the docker version information
-
-**Example request**:
-
-    GET /version HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "Version": "0.2.2",
-        "GitCommit": "5a2a5cc+CHANGES",
-        "GoVersion": "go1.0.3"
-    }
-
-Status Codes:
-
-- 
**200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "DisableNetwork": false, - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). 
- -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. 
`json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.12.md~ b/reference/api/docker_remote_api_v1.12.md~ deleted file mode 100644 index 58f3bc3a30..0000000000 --- a/reference/api/docker_remote_api_v1.12.md~ +++ /dev/null @@ -1,1443 +0,0 @@ -page_title: Remote API v1.12 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.12 - -## 1. Brief introduction - - - The Remote API has replaced `rcli`. 
- - The daemon listens on `unix:///var/run/docker.sock` but you can - [Bind Docker to another host/port or a Unix socket]( - /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket). - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by defaul -- **limit** – Show `limit` last created - containers, include non-running ones. -- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers'
-  sizes
-- **filters** – a JSON encoded value of the filters (a map[string][]string)
-  to process on the containers list.
-
-Status Codes:
-
-- **200** – no error
-- **400** – bad parameter
-- **500** – server error
-
-### Create a container
-
-`POST /containers/create`
-
-Create a container
-
-**Example request**:
-
-    POST /containers/create HTTP/1.1
-    Content-Type: application/json
-
-    {
-         "Hostname": "",
-         "Domainname": "",
-         "User": "",
-         "Memory": 0,
-         "MemorySwap": 0,
-         "CpuShares": 512,
-         "Cpuset": "0,1",
-         "AttachStdin": false,
-         "AttachStdout": true,
-         "AttachStderr": true,
-         "PortSpecs": null,
-         "Tty": false,
-         "OpenStdin": false,
-         "StdinOnce": false,
-         "Env": null,
-         "Cmd": [
-                 "date"
-         ],
-         "Image": "ubuntu",
-         "Volumes": {
-                 "/tmp": {}
-         },
-         "WorkingDir": "",
-         "NetworkDisabled": false,
-         "ExposedPorts": {
-                 "22/tcp": {}
-         }
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {
-         "Id": "e90e34656806",
-         "Warnings": []
-    }
-
-Json Parameters:
-
-- **config** – the container's configuration
-
-Query Parameters:
-
-- **name** – Assign the specified name to the container. Must
-  match `/?[a-zA-Z0-9_-]+`.
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": null, - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example 
response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-         "Titles": [
-                 "USER",
-                 "PID",
-                 "%CPU",
-                 "%MEM",
-                 "VSZ",
-                 "RSS",
-                 "TTY",
-                 "STAT",
-                 "START",
-                 "TIME",
-                 "COMMAND"
-         ],
-         "Processes": [
-                 ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"],
-                 ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"]
-         ]
-    }
-
-Query Parameters:
-
-- **ps_args** – ps arguments to use (e.g., aux)
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Get container logs
-
-`GET /containers/(id)/logs`
-
-Get stdout and stderr logs from the container `id`
-
-**Example request**:
-
-    GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **follow** – 1/True/true or 0/False/false, return stream.
-  Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log. Default false
-- **timestamps** – 1/True/true or 0/False/false, if logs=true, print
-  timestamps for every log line. 
Default false - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "Links":["redis3:redis"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false, - "Dns": ["8.8.8.8"], - "VolumesFrom": ["parent", "other:ro"] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – 
number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit. - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Pause a container - -`POST /containers/(id)/pause` - -Pause the container `id` - -**Example request**: - - POST /containers/e90e34656806/pause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Unpause a container - -`POST /containers/(id)/unpause` - -Unpause the container `id` - -**Example request**: - - POST /containers/e90e34656806/unpause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - 
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default false
-- **stream** – 1/True/true or 0/False/false, return stream. Default false
-- **stdin** – 1/True/true or 0/False/false, if stream=true, attach to stdin.
-  Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log, if stream=true, attach to stdout. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log, if stream=true, attach to stderr. Default false
-
-Status Codes:
-
-- **200** – no error
-- **400** – bad parameter
-- **404** – no such container
-- **500** – server error
-
-    **Stream details**:
-
-    When the TTY setting is enabled in
-    [`POST /containers/create`
-    ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"),
-    the stream is the raw data from the process PTY and the client's stdin.
-    When the TTY is disabled, the stream is multiplexed to separate
-    stdout and stderr.
-
-    The format is a **Header** and a **Payload** (frame).
-
-    **HEADER**
-
-    The header contains the information on which stream the payload
-    belongs to (stdout or stderr). It also contains the size of the
-    associated frame, encoded as a uint32 in the last 4 bytes.
-
-    It is encoded on the first 8 bytes like this:
-
-        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
-
-    `STREAM_TYPE` can be:
-
-- 0: stdin (will be written on stdout)
-- 1: stdout
-- 2: stderr
-
-    `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of
-    the uint32 size encoded as big endian.
-
-    **PAYLOAD**
-
-    The payload is the raw stream.
-
-    **IMPLEMENTATION**
-
-    The simplest way to implement the Attach protocol is the following:
-
-    1. Read 8 bytes
-    2. Choose stdout or stderr depending on the first byte
-    3. Extract the frame size from the last 4 bytes
-    4. Read the extracted size and output it on the correct output
-    5. 
Goto 1 - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** – 1/True/true or 0/False/false, Removes the container - even if it was running. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: - - dangling=true - - - -### Create an image - -`POST /images/create` - -Create an image, either by pull it from the registry or by importing i - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... 
- - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** 
– server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - If you wish to push an image onto a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 
1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
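The build-context requirements above (a tar archive, optionally compressed, with a `Dockerfile` at its root) can be sketched with Python's standard `tarfile` module. This is a minimal illustration of producing a valid `POST /build` body; `make_build_context` and the file names are illustrative, not part of the API:

```python
import io
import tarfile

def make_build_context(dockerfile, extra_files=None):
    """Create an uncompressed (identity) tar archive suitable as a
    POST /build request body: a Dockerfile at the archive root plus
    any other build-context files."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        files = {"Dockerfile": dockerfile}
        files.update(extra_files or {})
        for name, content in files.items():
            data = content.encode("utf-8")
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

context = make_build_context(
    "FROM ubuntu\nADD hello.txt /hello.txt\n",
    {"hello.txt": "hi\n"},
)
# `context` would be sent as the request body with
# Content-Type: application/tar
```

The same archive could also be compressed with gzip, bzip2, or xz before sending, since all four encodings are accepted.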
- -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** - remove intermediate containers after a successful build (default behavior) -- **forcerm** - always remove intermediate containers (includes rm) - -Request Headers: - -- **Content-type** – should be set to `"application/tar"`. -- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers": 11, - "Images": 16, - "Driver": "btrfs", - "ExecutionDriver": "native-0.1", - "KernelVersion": "3.12.0-1-amd64", - "Debug": false, - "NFd": 11, - "NGoroutines": 21, - "NEventsListener": 0, - "InitPath": "/usr/bin/docker", - "IndexServerAddress": ["https://index.docker.io/v1/"], - "MemoryLimit": true, - "SwapLimit": false, - "IPv4Forwarding": true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "ApiVersion": 
"1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). 
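For the polling case, the response body is one JSON object per line; a client can filter on the `time` field against its last-seen `since` value. A minimal sketch (`parse_events` is an illustrative helper, not part of the API):

```python
import json

def parse_events(raw, since=0):
    """Parse a newline-delimited JSON event stream from GET /events,
    keeping only events strictly newer than the `since` timestamp."""
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("time", 0) > since:
            events.append(event)
    return events

raw = (
    '{"status": "create", "id": "dfdf82bd3881", "from": "ubuntu:latest", "time": 1374067924}\n'
    '{"status": "stop", "id": "dfdf82bd3881", "from": "ubuntu:latest", "time": 1374067966}\n'
)
recent = parse_events(raw, since=1374067924)
# only the "stop" event is newer than the since timestamp
```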
- -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. 
`json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout, and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the remote API, add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.13.md~ b/reference/api/docker_remote_api_v1.13.md~ deleted file mode 100644 index 1590978f0c..0000000000 --- a/reference/api/docker_remote_api_v1.13.md~ +++ /dev/null @@ -1,1433 +0,0 @@ -page_title: Remote API v1.13 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.13 - -## 1. Brief introduction - - - The Remote API has replaced `rcli`. 
- - The daemon listens on `unix:///var/run/docker.sock` but you can - [Bind Docker to another host/port or a Unix socket]( - /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket). - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "Domainname": "", - "User":"", - "Memory":0, - "MemorySwap":0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Image":"ubuntu", - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "NetworkDisabled": false, - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Query Parameters: - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`. 
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": ["/name:alias"], - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - 
-**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Get container logs - -`GET /containers/(id)/logs` - -Get stdout and stderr logs from the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **follow** – 1/True/true or 0/False/false, return stream. Default false -- **stdout** – 1/True/true or 0/False/false, show stdout log. Default false -- **stderr** – 1/True/true or 0/False/false, show stderr log. Default false -- **timestamps** – 1/True/true or 0/False/false, print timestamps for every - log line. Default false -- **tail** – Output specified number of lines at the end of logs: `all` or - ``. 
Default all - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "Links":["redis3:redis"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false, - "Dns": ["8.8.8.8"], - "VolumesFrom": ["parent", "other:ro"] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **304** – container already started -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - 
-Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **304** – container already stopped -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit. 
- -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Pause a container - -`POST /containers/(id)/pause` - -Pause the container `id` - -**Example request**: - - POST /containers/e90e34656806/pause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Unpause a container - -`POST /containers/(id)/unpause` - -Unpause the container `id` - -**Example request**: - - POST /containers/e90e34656806/unpause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach to stdin. - Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. 
- When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains the information about which stream the frame - belongs to (stdout or stderr). It also contains the size of the - associated frame, encoded on the last 4 bytes (uint32). - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Goto 1 - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** – 1/True/true or 0/False/false, Removes the container - even if it was running. Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - 
"VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: - - dangling=true - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. 
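The `X-Registry-Auth` value is the base64 encoding of a JSON AuthConfig object, using the same fields shown under `POST /auth`. A minimal sketch with illustrative credentials:

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Build the X-Registry-Auth header value: a base64-encoded
    JSON AuthConfig object."""
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    payload = json.dumps(auth_config).encode("utf-8")
    return base64.b64encode(payload).decode("ascii")

header = registry_auth_header(
    "hannibal", "xxxx", "hannibal@a-team.com",
    "https://index.docker.io/v1/",
)
# send as:  X-Registry-Auth: <header>  on POST /images/create
```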
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST 
/images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - If you wish to push an image onto a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server 
error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
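The tar-stream requirement described above can be sketched in Python. This is a minimal, illustrative helper (the name `make_build_context` is hypothetical, not part of the API); a real client would then POST the returned bytes to `/build` with `Content-type: application/tar`.

```python
import io
import tarfile

def make_build_context(dockerfile: str) -> bytes:
    """Pack a Dockerfile (at the archive root) into an in-memory,
    uncompressed ("identity") tar archive suitable as the body of
    POST /build."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        data = dockerfile.encode("utf-8")
        info = tarfile.TarInfo(name="Dockerfile")  # must sit at the root
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

context = make_build_context("FROM ubuntu\nCMD [\"date\"]\n")
```

Using `mode="w:gz"` instead would produce a gzip-compressed archive, which the endpoint also accepts.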
- -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** - remove intermediate containers after a successful build (default behavior) -- **forcerm** - always remove intermediate containers (includes rm) - -Request Headers: - -- **Content-type** – should be set to `"application/tar"`. -- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers": 11, - "Images": 16, - "Driver": "btrfs", - "ExecutionDriver": "native-0.1", - "KernelVersion": "3.12.0-1-amd64", - "Debug": false, - "NFd": 11, - "NGoroutines": 21, - "NEventsListener": 0, - "InitPath": "/usr/bin/docker", - "IndexServerAddress": ["https://index.docker.io/v1/"], - "MemoryLimit": true, - "SwapLimit": false, - "IPv4Forwarding": true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "ApiVersion":
"1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). 
- -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. 
`json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the Remote API, add the flag -`--api-enable-cors` when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.14.md~ b/reference/api/docker_remote_api_v1.14.md~ deleted file mode 100644 index f4e1b3edc5..0000000000 --- a/reference/api/docker_remote_api_v1.14.md~ +++ /dev/null @@ -1,1442 +0,0 @@ -page_title: Remote API v1.14 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.14 - -## 1. Brief introduction - - - The Remote API has replaced `rcli`.
- - The daemon listens on `unix:///var/run/docker.sock` but you can - [Bind Docker to another host/port or a Unix socket]( - /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket). - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes -- **filters** - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters: - - exited=<int> -- containers with exit code of <int> - - status=(restarting|running|paused|exited) - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "Domainname": "", - "User":"", - "Memory":0, - "MemorySwap":0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Image":"ubuntu", - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "NetworkDisabled": false, - "ExposedPorts":{ - "22/tcp": {} - }, - "RestartPolicy": { "Name": "always" } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -- **RestartPolicy** – The behavior to apply when the container exits. The - value is an object with a `Name` property of either `"always"` to - always restart or `"on-failure"` to restart only when the container - exit code is non-zero. If `on-failure` is used, `MaximumRetryCount` - controls the number of times to retry before giving up. - The default is not to restart. (optional) - An ever-increasing delay (double the previous delay, starting at 100 ms) - is added before each restart to prevent flooding the server. -- **config** – the container's configuration - -Query Parameters: - -- **name** – Assign the specified name to the container. Must match `/?[a-zA-Z0-9_-]+`.
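The restart back-off described for `RestartPolicy` (each delay doubles, starting at 100 ms) can be sketched as follows; `restart_delay_ms` is an illustrative helper, not daemon code.

```python
def restart_delay_ms(attempt: int, base_ms: int = 100) -> int:
    """Delay inserted before restart attempt `attempt` (1-based):
    100 ms before the first retry, doubling on each subsequent one."""
    return base_ms * 2 ** (attempt - 1)
```

So the fourth restart of a flapping container is delayed by 800 ms under this scheme.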
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": ["/name:alias"], - "PublishAllPorts": false, - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"] - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: 
- - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Get container logs - -`GET /containers/(id)/logs` - -Get stdout and stderr logs from the container ``id`` - -**Example request**: - - GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **follow** – 1/True/true or 0/False/false, return stream. Default false -- **stdout** – 1/True/true or 0/False/false, show stdout log. Default false -- **stderr** – 1/True/true or 0/False/false, show stderr log. Default false -- **timestamps** – 1/True/true or 0/False/false, print timestamps for every - log line. Default false -- **tail** – Output specified number of lines at the end of logs: `all` or - ``.
Default all - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "Links":["redis3:redis"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false, - "Dns": ["8.8.8.8"], - "VolumesFrom": ["parent", "other:ro"], - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"] - } - -**Example response**: - - HTTP/1.1 204 No Content - -Json Parameters: - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **304** – container already started -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - 
HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **304** – container already stopped -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit. 
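The `signal` parameter above accepts either an integer or a name such as "SIGINT". A minimal sketch of building the query string for the kill request (the helper name `kill_query` is hypothetical):

```python
import signal
from urllib.parse import urlencode

def kill_query(sig=None) -> str:
    """Query string for POST /containers/(id)/kill. `sig` may be an
    int, a name like "SIGINT", or None (the daemon then assumes
    SIGKILL and waits for the container to exit)."""
    if sig is None:
        return ""
    if isinstance(sig, signal.Signals):
        sig = sig.name  # e.g. signal.SIGINT -> "SIGINT"
    return "?" + urlencode({"signal": sig})
```

A client would append the result to `/containers/(id)/kill` before POSTing.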
- -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Pause a container - -`POST /containers/(id)/pause` - -Pause the container `id` - -**Example request**: - - POST /containers/e90e34656806/pause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Unpause a container - -`POST /containers/(id)/unpause` - -Unpause the container `id` - -**Example request**: - - POST /containers/e90e34656806/unpause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach to stdin. - Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin.
- When the TTY is disabled, the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains the information about which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame, encoded in the last 4 bytes (uint32). - - The header is encoded in the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Go to step 1 - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** - 1/True/true or 0/False/false, Kill then remove the container. - Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 
131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: - - dangling=true - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. 
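The `X-Registry-Auth` header mentioned above carries a base64-encoded JSON AuthConfig object. A minimal sketch, assuming the same fields as the `/auth` example elsewhere in this document (`registry_auth_header` is an illustrative helper name):

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Build the X-Registry-Auth request header: a base64-encoded
    JSON AuthConfig object, as accepted by /images/create and
    /images/(name)/push."""
    auth = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    encoded = base64.b64encode(json.dumps(auth).encode("utf-8")).decode("ascii")
    return {"X-Registry-Auth": encoded}

headers = registry_auth_header("hannibal", "xxxx",
                               "hannibal@a-team.com",
                               "https://index.docker.io/v1/")
```

The resulting dict can be merged into the request headers of a pull or push call.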
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST 
/images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - If you wish to push an image onto a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server
error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
- -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** - remove intermediate containers after a successful build (default behavior) -- **forcerm** - always remove intermediate containers (includes rm) - - Request Headers: - -- **Content-type** – should be set to `"application/tar"`. -- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers": 11, - "Images": 16, - "Driver": "btrfs", - "ExecutionDriver": "native-0.1", - "KernelVersion": "3.12.0-1-amd64", - "Debug": false, - "NFd": 11, - "NGoroutines": 21, - "NEventsListener": 0, - "InitPath": "/usr/bin/docker", - "IndexServerAddress": ["https://index.docker.io/v1/"], - "MemoryLimit": true, - "SwapLimit": false, - "IPv4Forwarding": true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "ApiVersion":
"1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). 
- -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. 
`json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.15.md~ b/reference/api/docker_remote_api_v1.15.md~ deleted file mode 100644 index a956d454ac..0000000000 --- a/reference/api/docker_remote_api_v1.15.md~ +++ /dev/null @@ -1,1732 +0,0 @@ -page_title: Remote API v1.15 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.15 - -## 1. Brief introduction - - - The Remote API has replaced `rcli`. 
- - The daemon listens on `unix:///var/run/docker.sock` but you can - [Bind Docker to another host/port or a Unix socket]( - /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket). - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created - containers, include non-running ones. -- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers - sizes -- **filters** - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters: - - exited=<int> -- containers with exit code of <int> - - status=(restarting|running|paused|exited) - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Entrypoint": "", - "Image": "ubuntu", - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "MacAddress": "12:34:56:78:9a:bc", - "ExposedPorts": { - "22/tcp": {} - }, - "SecurityOpts": [""], - "HostConfig": { - "Binds": ["/tmp:/tmp"], - "Links": ["redis3:redis"], - "LxcConf": {"lxc.utsname":"docker"}, - "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts": false, - "Privileged": false, - "Dns": ["8.8.8.8"], - "DnsSearch": [""], - "ExtraHosts": null, - "VolumesFrom": ["parent", "other:ro"], - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"], - "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 }, - "NetworkMode": "bridge", - "Devices": [] - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id": "f91ddc4b01e079c4481a8340bbbeca4dbd33d6e4a10662e499f8eacbb5bf252b" - "Warnings": [] - } - -Json Parameters: - -- **Hostname** - A string value containing the desired hostname to use for the - container. -- **Domainname** - A string value containing the desired domain name to use - for the container. 
-- **User** - A string value containing the user to use inside the container. -- **Memory** - Memory limit in bytes. -- **MemorySwap** - Total memory usage (memory + swap); set `-1` to disable swap. -- **CpuShares** - An integer value containing the CPU shares for the container - (i.e., the relative weight vs. other containers). -- **CpuSet** - A string value containing the cgroups Cpuset to use. -- **AttachStdin** - Boolean value, attaches to stdin. -- **AttachStdout** - Boolean value, attaches to stdout. -- **AttachStderr** - Boolean value, attaches to stderr. -- **Tty** - Boolean value, attach standard streams to a tty, including stdin if it is not closed. -- **OpenStdin** - Boolean value, opens stdin. -- **StdinOnce** - Boolean value, close stdin after the first attached client disconnects. -- **Env** - A list of environment variables in the form of `VAR=value`. -- **Cmd** - Command to run specified as a string or an array of strings. -- **Entrypoint** - Set the entrypoint for the container as a string or an array - of strings. -- **Image** - String value containing the image name to use for the container. -- **Volumes** – An object mapping mountpoint paths (strings) inside the - container to empty objects. -- **WorkingDir** - A string value containing the working dir for commands to - run in. -- **NetworkDisabled** - Boolean value, when true disables networking for the - container. -- **ExposedPorts** - An object mapping ports to an empty object in the form of: - `"ExposedPorts": { "<port>/<tcp|udp>: {}" }` -- **SecurityOpts**: A list of string values to customize labels for MLS - systems, such as SELinux. -- **HostConfig** - - **Binds** – A list of volume bindings for this container. Each volume - binding is a string of the form `container_path` (to create a new - volume for the container), `host_path:container_path` (to bind-mount - a host path into the container), or `host_path:container_path:ro` - (to make the bind-mount read-only inside the container).
- - **Links** - A list of links for the container. Each link entry should be - of the form "container_name:alias". - - **LxcConf** - LXC specific configurations. These configurations will only - work when using the `lxc` execution driver. - - **PortBindings** - A map of exposed container ports and the host port they - should map to. It should be specified in the form - `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }` - Take note that `port` is specified as a string and not an integer value. - - **PublishAllPorts** - Allocates a random host port for all of a container's - exposed ports. Specified as a boolean value. - - **Privileged** - Gives the container full access to the host. Specified as - a boolean value. - - **Dns** - A list of DNS servers for the container to use. - - **DnsSearch** - A list of DNS search domains. - - **ExtraHosts** - A list of hostnames/IP mappings to be added to the - container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. - - **VolumesFrom** - A list of volumes to inherit from another container. - Specified in the form `<container name>[:<ro|rw>]` - - **CapAdd** - A list of kernel capabilities to add to the container. - - **CapDrop** - A list of kernel capabilities to drop from the container. - - **RestartPolicy** – The behavior to apply when the container exits. The - value is an object with a `Name` property of either `"always"` to - always restart or `"on-failure"` to restart only when the container - exit code is non-zero. If `on-failure` is used, `MaximumRetryCount` - controls the number of times to retry before giving up. - The default is not to restart. (optional) - An ever-increasing delay (double the previous delay, starting at 100 ms) - is added before each restart to prevent flooding the server. - - **NetworkMode** - Sets the networking mode for the container.
Supported - values are: `bridge`, `host`, and `container:<name|id>` - - **Devices** - A list of devices to add to the container specified in the - form - `{ "PathOnHost": "/dev/deviceName", "PathInContainer": "/dev/deviceName", "CgroupPermissions": "mrw"}` - -Query Parameters: - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`. - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:00", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, -
"Links": ["/name:alias"], - "PublishAllPorts": false, - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"] - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Get container logs - -`GET /containers/(id)/logs` - -Get stdout and stderr logs from the container ``id`` - -**Example request**: - - GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1×tamps=1&follow=1&tail=10 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **follow** – 1/True/true or 0/False/false, return stream. Default false -- **stdout** – 1/True/true or 0/False/false, show stdout log. Default false -- **stderr** – 1/True/true or 0/False/false, show stderr log. Default false -- **timestamps** – 1/True/true or 0/False/false, print timestamps for - every log line. Default false -- **tail** – Output specified number of lines at the end of logs: `all` or ``. 
Default all - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Resize a container TTY - -`GET /containers/(id)/resize?h=&w=` - -Resize the TTY of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Length: 0 - Content-Type: text/plain; charset=utf-8 - -Status Codes: - -- **200** – no error -- **404** – No such container -- **500** – bad file descriptor - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds": ["/tmp:/tmp"], - "Links": ["redis3:redis"], - "LxcConf": {"lxc.utsname":"docker"}, - "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts": false, - "Privileged": false, - "Dns": ["8.8.8.8"], - "DnsSearch": [""], - "VolumesFrom": ["parent", "other:ro"], - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"], - "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 }, - 
"NetworkMode": "bridge", - "Devices": [] - } - -**Example response**: - - HTTP/1.1 204 No Content - -Json Parameters: - -- **Binds** – A list of volume bindings for this container. Each volume - binding is a string of the form `container_path` (to create a new - volume for the container), `host_path:container_path` (to bind-mount - a host path into the container), or `host_path:container_path:ro` - (to make the bind-mount read-only inside the container). -- **Links** - A list of links for the container. Each link entry should be - of the form "container_name:alias". -- **LxcConf** - LXC specific configurations. These configurations will only - work when using the `lxc` execution driver. -- **PortBindings** - A map of exposed container ports and the host port they - should map to. It should be specified in the form - `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }` - Take note that `port` is specified as a string and not an integer value. -- **PublishAllPorts** - Allocates a random host port for all of a container's - exposed ports. Specified as a boolean value. -- **Privileged** - Gives the container full access to the host. Specified as - a boolean value. -- **Dns** - A list of DNS servers for the container to use. -- **DnsSearch** - A list of DNS search domains. -- **VolumesFrom** - A list of volumes to inherit from another container. - Specified in the form `<container name>[:<ro|rw>]` -- **CapAdd** - A list of kernel capabilities to add to the container. -- **CapDrop** - A list of kernel capabilities to drop from the container. -- **RestartPolicy** – The behavior to apply when the container exits. The - value is an object with a `Name` property of either `"always"` to - always restart or `"on-failure"` to restart only when the container - exit code is non-zero. If `on-failure` is used, `MaximumRetryCount` - controls the number of times to retry before giving up. - The default is not to restart.
(optional) - An ever-increasing delay (double the previous delay, starting at 100 ms) - is added before each restart to prevent flooding the server. -- **NetworkMode** - Sets the networking mode for the container. Supported - values are: `bridge`, `host`, and `container:<name|id>` -- **Devices** - A list of devices to add to the container specified in the - form - `{ "PathOnHost": "/dev/deviceName", "PathInContainer": "/dev/deviceName", "CgroupPermissions": "mrw"}` - -Status Codes: - -- **204** – no error -- **304** – container already started -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **304** – container already stopped -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call waits for the container to exit.
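The lifecycle calls above all share the same request shape; the tiny helper below (hypothetical, not part of the API) shows how the request lines in these examples are formed:

```python
from urllib.parse import urlencode

def request_line(method: str, path: str, **params) -> str:
    """Format the HTTP request line for a Remote API call,
    appending any query parameters (e.g. the stop timeout `t`)."""
    query = "?" + urlencode(params) if params else ""
    return "{} {}{} HTTP/1.1".format(method, path, query)

print(request_line("POST", "/containers/e90e34656806/stop", t=5))
# POST /containers/e90e34656806/stop?t=5 HTTP/1.1
```

On the wire this request line would be followed by the usual HTTP headers; the daemon answers 204 on success and 304 if the container is already in the requested state.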
- -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Pause a container - -`POST /containers/(id)/pause` - -Pause the container `id` - -**Example request**: - - POST /containers/e90e34656806/pause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Unpause a container - -`POST /containers/(id)/unpause` - -Unpause the container `id` - -**Example request**: - - POST /containers/e90e34656806/unpause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin.
- When the TTY is disabled, the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header indicates which stream the frame belongs to - (stdout or stderr). It also contains the size of the - associated frame, encoded as a uint32 in the last 4 bytes. - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big-endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Go to step 1 - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** - 1/True/true or 0/False/false, Kill then remove the container. - Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - 
}, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: - - dangling=true - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import. The value may be a URL from which the image - can be retrieved or `-` to read the image from the request body. 
-- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - - Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - 
{"status": "Pushing..."}
-    {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}}
-    {"error": "Invalid..."}
-    ...
-
-    If you wish to push an image to a private registry, that image must already have been tagged
-    into a repository which references that registry host name and port. This repository name should
-    then be used in the URL. This mirrors the flow of the CLI.
-
-**Example request**:
-
-    POST /images/registry.acme.com:5000/test/push HTTP/1.1
-
-
-Query Parameters:
-
-- **tag** – the tag to associate with the image on the registry, optional
-
-Request Headers:
-
-- **X-Registry-Auth** – include a base64-encoded AuthConfig
-  object.
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such image
-- **500** – server error
-
-### Tag an image into a repository
-
-`POST /images/(name)/tag`
-
-Tag the image `name` into a repository
-
-**Example request**:
-
-    POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-
-Query Parameters:
-
-- **repo** – The repository to tag in
-- **force** – 1/True/true or 0/False/false, default false
-- **tag** - The new tag name
-
-Status Codes:
-
-- **201** – no error
-- **400** – bad parameter
-- **404** – no such image
-- **409** – conflict
-- **500** – server error
-
-### Remove an image
-
-`DELETE /images/(name)`
-
-Remove the image `name` from the filesystem
-
-**Example request**:
-
-    DELETE /images/test HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-type: application/json
-
-    [
-     {"Untagged": "3e2f21a89f"},
-     {"Deleted": "3e2f21a89f"},
-     {"Deleted": "53b4f83ac9"}
-    ]
-
-Query Parameters:
-
-- **force** – 1/True/true or 0/False/false, default false
-- **noprune** – 1/True/true or 0/False/false, default false
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such image
-- **409** – conflict
-- **500** – server error
-
-### Search images
-
-`GET /images/search`
-
-Search for an image on [Docker
Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
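The build endpoint's requirement above (a tar archive, optionally compressed, with a `Dockerfile` at its root) is easy to satisfy programmatically. A minimal sketch in Python using only the standard library; the Dockerfile content is a made-up example, and any extra files added to the archive become part of the build context:

```python
import io
import tarfile

def make_build_context(dockerfile: bytes) -> bytes:
    """Build an in-memory tar archive with a Dockerfile at its root,
    suitable as the body of POST /build (Content-Type: application/tar)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="Dockerfile")
        info.size = len(dockerfile)
        tar.addfile(info, io.BytesIO(dockerfile))
    return buf.getvalue()

# Hypothetical build context; the resulting bytes would be streamed
# as the request body of POST /build.
context = make_build_context(b'FROM ubuntu\nCMD ["date"]\n')
```

The same archive could be gzip-compressed before sending, since the endpoint also accepts gzip, bzip2, and xz streams.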
-
-Query Parameters:
-
-- **t** – repository name (and optionally a tag) to be applied to
-  the resulting image in case of success
-- **remote** – git or HTTP/HTTPS URI build source
-- **q** – suppress verbose build output
-- **nocache** – do not use the cache when building the image
-- **rm** - remove intermediate containers after a successful build (default behavior)
-- **forcerm** - always remove intermediate containers (includes rm)
-
-    Request Headers:
-
-- **Content-type** – should be set to `"application/tar"`.
-- **X-Registry-Config** – base64-encoded ConfigFile object
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Check auth configuration
-
-`POST /auth`
-
-Get the default username and email
-
-**Example request**:
-
-    POST /auth HTTP/1.1
-    Content-Type: application/json
-
-    {
-         "username": "hannibal",
-         "password": "xxxx",
-         "email": "hannibal@a-team.com",
-         "serveraddress": "https://index.docker.io/v1/"
-    }
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-
-Status Codes:
-
-- **200** – no error
-- **204** – no error
-- **500** – server error
-
-### Display system-wide information
-
-`GET /info`
-
-Display system-wide information
-
-**Example request**:
-
-    GET /info HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "Containers": 11,
-        "Images": 16,
-        "Driver": "btrfs",
-        "ExecutionDriver": "native-0.1",
-        "KernelVersion": "3.12.0-1-amd64",
-        "Debug": false,
-        "NFd": 11,
-        "NGoroutines": 21,
-        "NEventsListener": 0,
-        "InitPath": "/usr/bin/docker",
-        "IndexServerAddress": ["https://index.docker.io/v1/"],
-        "MemoryLimit": true,
-        "SwapLimit": false,
-        "IPv4Forwarding": true
-    }
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Show the docker version information
-
-`GET /version`
-
-Show the docker version information
-
-**Example request**:
-
-    GET /version HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-        "ApiVersion":
"1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). 
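Events arrive as a stream of JSON objects, one per line, as shown in the example response for this endpoint. A client-side sketch of splitting such a payload into dictionaries; the sample reuses the documented example events:

```python
import json

def parse_event_stream(payload: str) -> list:
    """Split the newline-delimited JSON emitted by GET /events."""
    return [json.loads(line) for line in payload.splitlines() if line.strip()]

# Sample taken from the documented example response for GET /events.
sample = (
    '{"status": "create", "id": "dfdf82bd3881", "from": "ubuntu:latest", "time": 1374067924}\n'
    '{"status": "start", "id": "dfdf82bd3881", "from": "ubuntu:latest", "time": 1374067924}\n'
)
events = parse_event_stream(sample)
```

When streaming in real time, the same `json.loads` call would be applied to each line as it arrives rather than to a buffered payload.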
-
-Docker containers will report the following events:
-
-    create, destroy, die, export, kill, pause, restart, start, stop, unpause
-
-and Docker images will report:
-
-    untag, delete
-
-**Example request**:
-
-    GET /events?since=1374067924
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924}
-    {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924}
-    {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966}
-    {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970}
-
-Query Parameters:
-
-- **since** – timestamp used for polling
-- **until** – timestamp used for polling
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Get a tarball containing all images in a repository
-
-`GET /images/(name)/get`
-
-Get a tarball containing all images and metadata for the repository specified
-by `name`.
-
-If `name` is a specific name and tag (e.g. ubuntu:latest), then only that image
-(and its parents) are returned. If `name` is an image ID, similarly only that
-image (and its parents) are returned, but with the exclusion of the
-'repositories' file in the tarball, since no image names are referenced.
-
-See the [image tarball format](#image-tarball-format) for more details.
-
-**Example request**
-
-    GET /images/ubuntu/get
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/x-tar
-
-    Binary data stream
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Get a tarball containing all images
-
-`GET /images/get`
-
-Get a tarball containing all images and metadata for one or more repositories.
-
-For each value of the `names` parameter: if it is a specific name and tag (e.g.
-ubuntu:latest), then only that image (and its parents) are returned; if it is -an image ID, similarly only that image (and its parents) are returned and there -would be no names referenced in the 'repositories' file for this image ID. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. 
-
-```
-{"hello-world":
-    {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"}
-}
-```
-
-### Exec Create
-
-`POST /containers/(id)/exec`
-
-Sets up an exec instance in a running container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/exec HTTP/1.1
-    Content-Type: application/json
-
-    {
-      "AttachStdin": false,
-      "AttachStdout": true,
-      "AttachStderr": true,
-      "Tty": false,
-      "Cmd": [
-        "date"
-      ]
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {
-      "Id": "f90e34656806"
-    }
-
-Json Parameters:
-
-- **AttachStdin** - Boolean value, attaches to stdin of the exec command.
-- **AttachStdout** - Boolean value, attaches to stdout of the exec command.
-- **AttachStderr** - Boolean value, attaches to stderr of the exec command.
-- **Tty** - Boolean value to allocate a pseudo-TTY
-- **Cmd** - Command to run specified as a string or an array of strings.
-
-
-Status Codes:
-
-- **201** – no error
-- **404** – no such container
-
-### Exec Start
-
-`POST /exec/(id)/start`
-
-Starts a previously set up exec instance `id`. If `detach` is true, this API
-returns after starting the `exec` command. Otherwise, this API sets up an
-interactive session with the `exec` command.
-
-**Example request**:
-
-    POST /exec/e90e34656806/start HTTP/1.1
-    Content-Type: application/json
-
-    {
-      "Detach": false,
-      "Tty": false
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {{ STREAM }}
-
-Json Parameters:
-
-- **Detach** - Detach from the exec command
-- **Tty** - Boolean value to allocate a pseudo-TTY
-
-Status Codes:
-
-- **201** – no error
-- **404** – no such exec instance
-
-    **Stream details**:
-    Similar to the stream behavior of the `POST /containers/(id)/attach` API
-
-### Exec Resize
-
-`POST /exec/(id)/resize`
-
-Resizes the tty session used by the exec command `id`.
-This API is valid only if `tty` was specified as part of creating and starting the exec command.
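Exec Start notes that its stream behaves like the attach endpoint; when no TTY is allocated, that stream is multiplexed into frames with the 8-byte header described under the attach endpoint's stream details (stream-type byte, three zero bytes, then a big-endian uint32 payload length). A client-side demultiplexer sketch:

```python
def demux(data: bytes):
    """Split a multiplexed attach/exec stream into (stdout, stderr).

    Each frame is [STREAM_TYPE, 0, 0, 0, SIZE1..SIZE4] followed by a
    payload, where SIZE1..SIZE4 is a big-endian uint32 payload length.
    """
    out, err = bytearray(), bytearray()
    offset = 0
    while offset + 8 <= len(data):
        stream_type = data[offset]
        size = int.from_bytes(data[offset + 4:offset + 8], "big")
        payload = data[offset + 8:offset + 8 + size]
        # 2 = stderr; 0 (stdin echoed back) and 1 (stdout) go to stdout
        (err if stream_type == 2 else out).extend(payload)
        offset += 8 + size
    return bytes(out), bytes(err)
```

In a real client the same loop would read frame headers from the hijacked socket rather than from an in-memory buffer.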
-
-**Example request**:
-
-    POST /exec/e90e34656806/resize HTTP/1.1
-    Content-Type: text/plain
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: text/plain
-
-Query Parameters:
-
-- **h** – height of tty session
-- **w** – width
-
-Status Codes:
-
-- **201** – no error
-- **404** – no such exec instance
-
-# 3. Going further
-
-## 3.1 Inside `docker run`
-
-As an example, the `docker run` command line makes the following API calls:
-
-- Create the container
-
-- If the status code is 404, it means the image doesn't exist:
-    - Try to pull it
-    - Then retry to create the container
-
-- Start the container
-
-- If you are not in detached mode:
-    - Attach to the container, using logs=1 (to have stdout and
-      stderr from the container's start) and stream=1
-
-- If in detached mode or only stdin is attached:
-    - Display the container's id
-
-## 3.2 Hijacking
-
-In this version of the API, `/attach` uses hijacking to transport `stdin`,
-`stdout`, and `stderr` on the same socket. This might change in the future.
-
-## 3.3 CORS Requests
-
-To enable cross-origin requests to the remote API, add the flag
-`--api-enable-cors` when running Docker in daemon mode.
-
-    $ docker -d -H="192.168.1.9:2375" --api-enable-cors
diff --git a/reference/api/docker_remote_api_v1.16.md~ b/reference/api/docker_remote_api_v1.16.md~
deleted file mode 100644
index 86df97b717..0000000000
--- a/reference/api/docker_remote_api_v1.16.md~
+++ /dev/null
@@ -1,1800 +0,0 @@
-page_title: Remote API v1.16
-page_description: API Documentation for Docker
-page_keywords: API, Docker, rcli, REST, documentation
-
-# Docker Remote API v1.16
-
-## 1. Brief introduction
-
-- - The Remote API has replaced `rcli`.
-- - The daemon listens on `unix:///var/run/docker.sock` but you can
-   [Bind Docker to another host/port or a Unix socket](
-   /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket).
- - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created - containers, include non-running ones. -- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. -- **size** – 1/True/true or 0/False/false, Show the containers - sizes -- **filters** - a json encoded value of the filters (a map[string][]string) to process on the containers list. 
Available filters:
-  - exited=<int> -- containers with exit code of <int>
-  - status=(restarting|running|paused|exited)
-
-Status Codes:
-
-- **200** – no error
-- **400** – bad parameter
-- **500** – server error
-
-### Create a container
-
-`POST /containers/create`
-
-Create a container
-
-**Example request**:
-
-    POST /containers/create HTTP/1.1
-    Content-Type: application/json
-
-    {
-         "Hostname": "",
-         "Domainname": "",
-         "User": "",
-         "Memory": 0,
-         "MemorySwap": 0,
-         "CpuShares": 512,
-         "Cpuset": "0,1",
-         "AttachStdin": false,
-         "AttachStdout": true,
-         "AttachStderr": true,
-         "Tty": false,
-         "OpenStdin": false,
-         "StdinOnce": false,
-         "Env": null,
-         "Cmd": [
-                 "date"
-         ],
-         "Entrypoint": "",
-         "Image": "ubuntu",
-         "Volumes": {
-                 "/tmp": {}
-         },
-         "WorkingDir": "",
-         "NetworkDisabled": false,
-         "MacAddress": "12:34:56:78:9a:bc",
-         "ExposedPorts": {
-                 "22/tcp": {}
-         },
-         "SecurityOpts": [""],
-         "HostConfig": {
-           "Binds": ["/tmp:/tmp"],
-           "Links": ["redis3:redis"],
-           "LxcConf": {"lxc.utsname":"docker"},
-           "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] },
-           "PublishAllPorts": false,
-           "Privileged": false,
-           "Dns": ["8.8.8.8"],
-           "DnsSearch": [""],
-           "ExtraHosts": null,
-           "VolumesFrom": ["parent", "other:ro"],
-           "CapAdd": ["NET_ADMIN"],
-           "CapDrop": ["MKNOD"],
-           "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 },
-           "NetworkMode": "bridge",
-           "Devices": []
-        }
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {
-         "Id": "e90e34656806",
-         "Warnings": []
-    }
-
-Json Parameters:
-
-- **Hostname** - A string value containing the desired hostname to use for the
-  container.
-- **Domainname** - A string value containing the desired domain name to use
-  for the container.
-- **User** - A string value containing the user to use inside the container.
-- **Memory** - Memory limit in bytes.
-- **MemorySwap** - Total memory usage (memory + swap); set `-1` to disable swap.
-- **CpuShares** - An integer value containing the CPU Shares for the container
-  (i.e., the relative weight vs. other containers).
-- **Cpuset** - A string value containing the cgroups Cpuset to use.
-- **AttachStdin** - Boolean value, attaches to stdin.
-- **AttachStdout** - Boolean value, attaches to stdout.
-- **AttachStderr** - Boolean value, attaches to stderr.
-- **Tty** - Boolean value, Attach standard streams to a tty, including stdin if it is not closed.
-- **OpenStdin** - Boolean value, opens stdin.
-- **StdinOnce** - Boolean value, close stdin after the first attached client disconnects.
-- **Env** - A list of environment variables in the form of `VAR=value`
-- **Cmd** - Command to run specified as a string or an array of strings.
-- **Entrypoint** - Set the entrypoint for the container as a string or an array
-  of strings
-- **Image** - String value containing the image name to use for the container
-- **Volumes** – An object mapping mountpoint paths (strings) inside the
-  container to empty objects.
-- **WorkingDir** - A string value containing the working dir for commands to
-  run in.
-- **NetworkDisabled** - Boolean value, when true disables networking for the
-  container
-- **ExposedPorts** - An object mapping ports to an empty object in the form of:
-  `"ExposedPorts": { "<port>/<tcp|udp>": {} }`
-- **SecurityOpts**: A list of string values to customize labels for MLS
-  systems, such as SELinux.
-- **HostConfig**
-    - **Binds** – A list of volume bindings for this container. Each volume
-      binding is a string of the form `container_path` (to create a new
-      volume for the container), `host_path:container_path` (to bind-mount
-      a host path into the container), or `host_path:container_path:ro`
-      (to make the bind-mount read-only inside the container).
-    - **Links** - A list of links for the container. Each link entry should be
-      of the form "container_name:alias".
-    - **LxcConf** - LXC specific configurations.
These configurations will only
-      work when using the `lxc` execution driver.
-    - **PortBindings** - A map of exposed container ports and the host port they
-      should map to. It should be specified in the form
-      `{ "<port>/<protocol>": [{ "HostPort": "<port>" }] }`.
-      Take note that `port` is specified as a string and not an integer value.
-    - **PublishAllPorts** - Allocates a random host port for all of a container's
-      exposed ports. Specified as a boolean value.
-    - **Privileged** - Gives the container full access to the host. Specified as
-      a boolean value.
-    - **Dns** - A list of DNS servers for the container to use.
-    - **DnsSearch** - A list of DNS search domains
-    - **ExtraHosts** - A list of hostnames/IP mappings to be added to the
-      container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`.
-    - **VolumesFrom** - A list of volumes to inherit from another container.
-      Specified in the form `<container name>[:<ro|rw>]`
-    - **CapAdd** - A list of kernel capabilities to add to the container.
-    - **CapDrop** - A list of kernel capabilities to drop from the container.
-    - **RestartPolicy** – The behavior to apply when the container exits. The
-      value is an object with a `Name` property of either `"always"` to
-      always restart or `"on-failure"` to restart only when the container
-      exit code is non-zero. If `on-failure` is used, `MaximumRetryCount`
-      controls the number of times to retry before giving up.
-      The default is not to restart. (optional)
-      An ever-increasing delay (double the previous delay, starting at 100 ms)
-      is added before each restart to prevent flooding the server.
-    - **NetworkMode** - Sets the networking mode for the container. Supported
-      values are: `bridge`, `host`, and `container:<name|id>`
-    - **Devices** - A list of devices to add to the container specified in the
-      form
-      `{ "PathOnHost": "/dev/deviceName", "PathInContainer": "/dev/deviceName", "CgroupPermissions": "mrw"}`
-
-Query Parameters:
-
-- **name** – Assign the specified name to the container. Must
-  match `/?[a-zA-Z0-9_-]+`.
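The `name` constraint can be checked client-side before issuing the create request; a small sketch (this helper is illustrative, not part of the API) using the documented pattern:

```python
import re

# Pattern taken from the `name` query parameter documented above.
NAME_RE = re.compile(r"^/?[a-zA-Z0-9_-]+$")

def is_valid_container_name(name: str) -> bool:
    """True if `name` is acceptable for ?name= on POST /containers/create."""
    return NAME_RE.match(name) is not None
```

Validating locally avoids a round trip that would otherwise end in a 500 from the daemon.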
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": ["/name:alias"], - "PublishAllPorts": false, - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"] - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: 
-
-    GET /containers/4fa6e0f0c678/top HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-         "Titles": [
-                 "USER",
-                 "PID",
-                 "%CPU",
-                 "%MEM",
-                 "VSZ",
-                 "RSS",
-                 "TTY",
-                 "STAT",
-                 "START",
-                 "TIME",
-                 "COMMAND"
-         ],
-         "Processes": [
-                 ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"],
-                 ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"]
-         ]
-    }
-
-Query Parameters:
-
-- **ps_args** – ps arguments to use (e.g., aux)
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such container
-- **500** – server error
-
-### Get container logs
-
-`GET /containers/(id)/logs`
-
-Get stdout and stderr logs from the container `id`
-
-**Example request**:
-
-    GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **follow** – 1/True/true or 0/False/false, return stream. Default false
-- **stdout** – 1/True/true or 0/False/false, show stdout log. Default false
-- **stderr** – 1/True/true or 0/False/false, show stderr log. Default false
-- **timestamps** – 1/True/true or 0/False/false, print timestamps for
-  every log line. Default false
-- **tail** – Output specified number of lines at the end of logs: `all` or `<number>`.
Default all - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Resize a container TTY - -`POST /containers/(id)/resize?h=&w=` - -Resize the TTY for container with `id`. The container must be restarted for the resize to take effect. 
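Clients pass the new dimensions as `h` and `w` query parameters; a small helper (illustrative, not part of the API) for composing the request path:

```python
from urllib.parse import urlencode

def resize_path(container_id: str, h: int, w: int) -> str:
    """Path and query string for POST /containers/(id)/resize."""
    return "/containers/%s/resize?%s" % (container_id, urlencode({"h": h, "w": w}))
```

A terminal emulator would typically call this from its window-resize handler, re-sending the current rows and columns.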
-
-**Example request**:
-
-    POST /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Length: 0
-    Content-Type: text/plain; charset=utf-8
-
-Status Codes:
-
-- **200** – no error
-- **404** – No such container
-- **500** – Cannot resize container
-
-### Start a container
-
-`POST /containers/(id)/start`
-
-Start the container `id`
-
-**Example request**:
-
-    POST /containers/(id)/start HTTP/1.1
-    Content-Type: application/json
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **304** – container already started
-- **404** – no such container
-- **500** – server error
-
-### Stop a container
-
-`POST /containers/(id)/stop`
-
-Stop the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/stop?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **304** – container already stopped
-- **404** – no such container
-- **500** – server error
-
-### Restart a container
-
-`POST /containers/(id)/restart`
-
-Restart the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/restart?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Kill a container
-
-`POST /containers/(id)/kill`
-
-Kill the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/kill HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **signal** - Signal to send to the container: integer or string like "SIGINT".
-  When not set, SIGKILL is assumed and the call waits for the container to exit.
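Since the `signal` query parameter accepts either an integer or a name such as "SIGINT", a client might normalize user input to the name form. A sketch using Python's `signal` module; signal numbers are platform-dependent, so the integer branch assumes a POSIX-style mapping:

```python
import signal

def normalize_signal(sig) -> str:
    """Normalize 2, "int", or "SIGINT" to the "SIGINT" form for
    POST /containers/(id)/kill?signal=...

    Integer lookup uses the local platform's signal table, which is
    assumed to match the daemon host's.
    """
    if isinstance(sig, int):
        return signal.Signals(sig).name
    name = str(sig).upper()
    return name if name.startswith("SIG") else "SIG" + name
```

Sending the symbolic name keeps request logs readable regardless of which form the user supplied.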
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Pause a container
-
-`POST /containers/(id)/pause`
-
-Pause the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/pause HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Unpause a container
-
-`POST /containers/(id)/unpause`
-
-Unpause the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/unpause HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Attach to a container
-
-`POST /containers/(id)/attach`
-
-Attach to the container `id`
-
-**Example request**:
-
-    POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default false
-- **stream** – 1/True/true or 0/False/false, return stream.
-  Default false
-- **stdin** – 1/True/true or 0/False/false, if stream=true, attach
-  to stdin. Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log, if stream=true, attach to stdout. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log, if stream=true, attach to stderr. Default false
-
-Status Codes:
-
-- **200** – no error
-- **400** – bad parameter
-- **404** – no such container
-- **500** – server error
-
-    **Stream details**:
-
-    When the TTY setting is enabled in
-    [`POST /containers/create`
-    ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"),
-    the stream is the raw data from the process PTY and client's stdin.
-    When the TTY is disabled, then the stream is multiplexed to separate
-    stdout and stderr.
-
-    The format is a **Header** and a **Payload** (frame).
-
-    **HEADER**
-
-    The header contains the type of the stream being written
-    (stdout or stderr). It also contains the size of the
-    associated frame, encoded in the last 4 bytes (uint32).
-
-    It is encoded on the first 8 bytes like this:
-
-        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
-
-    `STREAM_TYPE` can be:
-
-- 0: stdin (will be written on stdout)
-- 1: stdout
-- 2: stderr
-
-    `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of
-    the uint32 size encoded as big endian.
-
-    **PAYLOAD**
-
-    The payload is the raw stream.
-
-    **IMPLEMENTATION**
-
-    The simplest way to implement the Attach protocol is the following:
-
-    1. Read 8 bytes
-    2. Choose stdout or stderr depending on the first byte
-    3. Extract the frame size from the last 4 bytes
-    4. Read the extracted size and output it on the correct output
-    5. Go to step 1
-
-### Attach to a container (websocket)
-
-`GET /containers/(id)/attach/ws`
-
-Attach to the container `id` via websocket
-
-Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455)
-
-**Example request**
-
-    GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1
-
-**Example response**
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default false
-- **stream** – 1/True/true or 0/False/false, return stream.
-  Default false
-- **stdin** – 1/True/true or 0/False/false, if stream=true, attach
-  to stdin. Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log, if stream=true, attach to stdout. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait for a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then return the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated with the container. Default false -- **force** - 1/True/true or 0/False/false, Kill then remove the container. - Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - 
}, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: - - dangling=true - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import. The value may be a URL from which the image - can be retrieved or `-` to read the image from the request body. 
-- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - - Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - 
{"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}}} - {"error": "Invalid..."} - ... - - If you wish to push an image on to a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig - object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker 
Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
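The build context described above can be assembled programmatically. Below is a sketch using Python's standard `tarfile` module — the file names and Dockerfile contents are invented for illustration, and the actual POST (with the `Content-type` header and the archive as the request body) is omitted:

```python
import io
import tarfile

def make_build_context(dockerfile: str, extra_files=None) -> bytes:
    """Create a gzip-compressed tar archive usable as a /build request body.

    The Dockerfile must sit at the archive root; extra_files maps archive
    paths to text contents that become available to ADD/COPY instructions.
    """
    extra_files = extra_files or {}
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, content in {"Dockerfile": dockerfile, **extra_files}.items():
            data = content.encode("utf-8")
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Hypothetical one-file context; gzip is one of the accepted compressions.
context = make_build_context("FROM ubuntu\nADD hello.txt /hello.txt\n",
                             {"hello.txt": "hello\n"})
```

Re-opening the archive confirms the `Dockerfile` is at the root, which is the only structural requirement the endpoint imposes.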
- -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **pull** - attempt to pull the image even if an older image exists locally -- **rm** - remove intermediate containers after a successful build (default behavior) -- **forcerm** - always remove intermediate containers (includes rm) - -Request Headers: - -- **Content-type** – should be set to `"application/tar"`. -- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Driver":"btrfs", - "DriverStatus": [[""]], - "ExecutionDriver":"native-0.1", - "KernelVersion":"3.12.0-1-amd64", - "NCPU":1, - "MemTotal":2099236864, - "Name":"prod-server-42", - "ID":"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS", - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "NEventsListener":0, - "InitPath":"/usr/bin/docker", - "InitSha1":"", - "IndexServerAddress":["https://index.docker.io/v1/"], - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true, - "Labels":["storage=ssd"], - "DockerRootDir": "/var/lib/docker", - 
"OperatingSystem": "Boot2Docker" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "ApiVersion": "1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- 
**500** – server error - -### Monitor Docker's events - -`GET /events` - -Get container events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters: - - event=<string> -- event to filter - - image=<string> -- image to filter - - container=<string> -- container to filter - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository specified -by `name`. - -If `name` is a specific name and tag (e.g. ubuntu:latest), then only that image -(and its parents) are returned. If `name` is an image ID, similarly only that -image (and its parents) are returned, but with the exclusion of the -'repositories' file in the tarball, since no image names are referenced. - -See the [image tarball format](#image-tarball-format) for more details. 
- -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images. - -`GET /images/get` - -Get a tarball containing all images and metadata for one or more repositories. - -For each value of the `names` parameter: if it is a specific name and tag (e.g. -ubuntu:latest), then only that image (and its parents) are returned; if it is -an image ID, similarly only that image (and its parents) are returned and there -would be no names referenced in the 'repositories' file for this image ID. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. 
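The layout above — one directory per layer holding `VERSION`, `json`, and `layer.tar` — can be sketched by assembling a synthetic archive with Python's standard `tarfile` module. The layer ID, metadata, and repository name below are placeholders for illustration, not real image data:

```python
import io
import json
import tarfile

def add_file(tar, name, data: bytes):
    """Append an in-memory file to an open tarfile."""
    info = tarfile.TarInfo(name=name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

layer_id = "0" * 64  # placeholder long ID, not a real layer

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    add_file(tar, layer_id + "/VERSION", b"1.0")
    add_file(tar, layer_id + "/json", json.dumps({"id": layer_id}).encode())
    add_file(tar, layer_id + "/layer.tar", b"")  # filesystem diff goes here
    # Optional index mapping repository tags to layer IDs.
    add_file(tar, "repositories",
             json.dumps({"hello-world": {"latest": layer_id}}).encode())
image_tarball = buf.getvalue()
```

A real archive produced by `GET /images/(name)/get` follows this same shape, with one such directory per layer in the image's parent chain.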
- -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -### Exec Create - -`POST /containers/(id)/exec` - -Sets up an exec instance in a running container `id` - -**Example request**: - - POST /containers/e90e34656806/exec HTTP/1.1 - Content-Type: application/json - - { - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "Tty": false, - "Cmd": [ - "date" - ], - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/json - - { - "Id": "f90e34656806" - } - -Json Parameters: - -- **AttachStdin** - Boolean value, attaches to stdin of the exec command. -- **AttachStdout** - Boolean value, attaches to stdout of the exec command. -- **AttachStderr** - Boolean value, attaches to stderr of the exec command. -- **Tty** - Boolean value to allocate a pseudo-TTY -- **Cmd** - Command to run specified as a string or an array of strings. - - -Status Codes: - -- **201** – no error -- **404** – no such container - -### Exec Start - -`POST /exec/(id)/start` - -Starts a previously set up exec instance `id`. If `detach` is true, this API -returns after starting the `exec` command. Otherwise, this API sets up an -interactive session with the `exec` command. 
- -**Example request**: - - POST /exec/e90e34656806/start HTTP/1.1 - Content-Type: application/json - - { - "Detach": false, - "Tty": false, - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/json - - {{ STREAM }} - -Json Parameters: - -- **Detach** - Detach from the exec command -- **Tty** - Boolean value to allocate a pseudo-TTY - -Status Codes: - -- **201** – no error -- **404** – no such exec instance - - **Stream details**: - Similar to the stream behavior of `POST /container/(id)/attach` API - -### Exec Resize - -`POST /exec/(id)/resize` - -Resizes the tty session used by the exec command `id`. -This API is valid only if `tty` was specified as part of creating and starting the exec command. - -**Example request**: - - POST /exec/e90e34656806/resize HTTP/1.1 - Content-Type: plain/text - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: plain/text - -Query Parameters: - -- **h** – height of tty session -- **w** – width - -Status Codes: - -- **201** – no error -- **404** – no such exec instance - -### Exec Inspect - -`GET /exec/(id)/json` - -Return low-level information about the exec command `id`. 
- -**Example request**: - - GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: plain/text - - { - "ID" : "11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39", - "Running" : false, - "ExitCode" : 2, - "ProcessConfig" : { - "privileged" : false, - "user" : "", - "tty" : false, - "entrypoint" : "sh", - "arguments" : [ - "-c", - "exit 2" - ] - }, - "OpenStdin" : false, - "OpenStderr" : false, - "OpenStdout" : false, - "Container" : { - "State" : { - "Running" : true, - "Paused" : false, - "Restarting" : false, - "OOMKilled" : false, - "Pid" : 3650, - "ExitCode" : 0, - "Error" : "", - "StartedAt" : "2014-11-17T22:26:03.717657531Z", - "FinishedAt" : "0001-01-01T00:00:00Z" - }, - "ID" : "8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c", - "Created" : "2014-11-17T22:26:03.626304998Z", - "Path" : "date", - "Args" : [], - "Config" : { - "Hostname" : "8f177a186b97", - "Domainname" : "", - "User" : "", - "Memory" : 0, - "MemorySwap" : 0, - "CpuShares" : 0, - "Cpuset" : "", - "AttachStdin" : false, - "AttachStdout" : false, - "AttachStderr" : false, - "PortSpecs" : null, - "ExposedPorts" : null, - "Tty" : false, - "OpenStdin" : false, - "StdinOnce" : false, - "Env" : [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], - "Cmd" : [ - "date" - ], - "Image" : "ubuntu", - "Volumes" : null, - "WorkingDir" : "", - "Entrypoint" : null, - "NetworkDisabled" : false, - "MacAddress" : "", - "OnBuild" : null, - "SecurityOpt" : null - }, - "Image" : "5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5", - "NetworkSettings" : { - "IPAddress" : "172.17.0.2", - "IPPrefixLen" : 16, - "MacAddress" : "02:42:ac:11:00:02", - "Gateway" : "172.17.42.1", - "Bridge" : "docker0", - "PortMapping" : null, - "Ports" : {} - }, - "ResolvConfPath" : 
"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf", - "HostnamePath" : "/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname", - "HostsPath" : "/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts", - "Name" : "/test", - "Driver" : "aufs", - "ExecDriver" : "native-0.2", - "MountLabel" : "", - "ProcessLabel" : "", - "AppArmorProfile" : "", - "RestartCount" : 0, - "Volumes" : {}, - "VolumesRW" : {} - } - } - -Status Codes: - -- **200** – no error -- **404** – no such exec instance -- **500** - server error - -# 3. Going further - -## 3.1 Inside `docker run` - -As an example, the `docker run` command line makes the following API calls: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the Remote API, add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.17.md~ b/reference/api/docker_remote_api_v1.17.md~ deleted file mode 100644 index 955ae8fb54..0000000000 --- a/reference/api/docker_remote_api_v1.17.md~ +++ /dev/null @@ -1,1975 +0,0 @@ -page_title: Remote API v1.17 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.17 - -## 1. 
Brief introduction - - - The Remote API has replaced `rcli`. - - The daemon listens on `unix:///var/run/docker.sock` but you can - [Bind Docker to another host/port or a Unix socket]( - /articles/basics/#bind-docker-to-another-hostport-or-a-unix-socket). - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `STDOUT`, - `STDIN` and `STDERR`. - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "ubuntu:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "ubuntu:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created - containers, include non-running ones. -- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers - sizes -- **filters** - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters: - - exited=<int> -- containers with exit code of <int> - - status=(restarting|running|paused|exited) - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Entrypoint": "", - "Image": "ubuntu", - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "MacAddress": "12:34:56:78:9a:bc", - "ExposedPorts": { - "22/tcp": {} - }, - "SecurityOpts": [""], - "HostConfig": { - "Binds": ["/tmp:/tmp"], - "Links": ["redis3:redis"], - "LxcConf": {"lxc.utsname":"docker"}, - "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts": false, - "Privileged": false, - "ReadonlyRootfs": false, - "Dns": ["8.8.8.8"], - "DnsSearch": [""], - "ExtraHosts": null, - "VolumesFrom": ["parent", "other:ro"], - "CapAdd": ["NET_ADMIN"], - "CapDrop": ["MKNOD"], - "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 }, - "NetworkMode": "bridge", - "Devices": [] - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **Hostname** - A string value containing the desired hostname to use for the - container. -- **Domainname** - A string value containing the desired domain name to use - for the container. 
-- **User** - A string value containing the user to use inside the container. -- **Memory** - Memory limit in bytes. -- **MemorySwap** - Total memory usage (memory + swap); set `-1` to disable swap. -- **CpuShares** - An integer value containing the CPU shares for the container - (i.e., the relative weight vs. other containers). -- **CpuSet** - String value containing the cgroups Cpuset to use. -- **AttachStdin** - Boolean value, attaches to stdin. -- **AttachStdout** - Boolean value, attaches to stdout. -- **AttachStderr** - Boolean value, attaches to stderr. -- **Tty** - Boolean value, Attach standard streams to a tty, including stdin if it is not closed. -- **OpenStdin** - Boolean value, opens stdin. -- **StdinOnce** - Boolean value, close stdin after the first attached client disconnects. -- **Env** - A list of environment variables in the form of `VAR=value` -- **Cmd** - Command to run specified as a string or an array of strings. -- **Entrypoint** - Set the entrypoint for the container as a string or an array - of strings -- **Image** - String value containing the image name to use for the container -- **Volumes** – An object mapping mountpoint paths (strings) inside the - container to empty objects. -- **WorkingDir** - A string value containing the working dir for commands to - run in. -- **NetworkDisabled** - Boolean value, when true disables networking for the - container -- **ExposedPorts** - An object mapping ports to an empty object in the form of: - `"ExposedPorts": { "<port>/<tcp|udp>": {} }` -- **SecurityOpts**: A list of string values to customize labels for MLS - systems, such as SELinux. -- **HostConfig** - - **Binds** – A list of volume bindings for this container. Each volume - binding is a string of the form `container_path` (to create a new - volume for the container), `host_path:container_path` (to bind-mount - a host path into the container), or `host_path:container_path:ro` - (to make the bind-mount read-only inside the container). 
- - **Links** - A list of links for the container. Each link entry should be - of the form "container_name:alias". - - **LxcConf** - LXC specific configurations. These configurations will only - work when using the `lxc` execution driver. - - **PortBindings** - A map of exposed container ports and the host port they - should map to. It should be specified in the form - `{ "<port>/<protocol>": [{ "HostPort": "<port>" }] }`. - Take note that `port` is specified as a string and not an integer value. - - **PublishAllPorts** - Allocates a random host port for all of a container's - exposed ports. Specified as a boolean value. - - **Privileged** - Gives the container full access to the host. Specified as - a boolean value. - - **ReadonlyRootfs** - Mount the container's root filesystem as read only. - Specified as a boolean value. - - **Dns** - A list of DNS servers for the container to use. - - **DnsSearch** - A list of DNS search domains - - **ExtraHosts** - A list of hostnames/IP mappings to be added to the - container's `/etc/hosts` file. Specified in the form `["hostname:IP"]`. - - **VolumesFrom** - A list of volumes to inherit from another container. - Specified in the form `<container name>[:<ro|rw>]` - - **CapAdd** - A list of kernel capabilities to add to the container. - - **CapDrop** - A list of kernel capabilities to drop from the container. - - **RestartPolicy** – The behavior to apply when the container exits. The - value is an object with a `Name` property of either `"always"` to - always restart or `"on-failure"` to restart only when the container - exit code is non-zero. If `on-failure` is used, `MaximumRetryCount` - controls the number of times to retry before giving up. - The default is not to restart. (optional) - An ever increasing delay (double the previous delay, starting at 100 ms) - is added before each restart to prevent flooding the server. - - **NetworkMode** - Sets the networking mode for the container. 
Supported - values are: `bridge`, `host`, and `container:` - - **Devices** - A list of devices to add to the container specified in the - form - `{ "PathOnHost": "/dev/deviceName", "PathInContainer": "/dev/deviceName", "CgroupPermissions": "mrw"}` - -Query Parameters: - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`. - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "AppArmorProfile": "", - "Args": [ - "-c", - "exit 9" - ], - "Config": { - "AttachStderr": true, - "AttachStdin": false, - "AttachStdout": true, - "Cmd": [ - "/bin/sh", - "-c", - "exit 9" - ], - "CpuShares": 0, - "Cpuset": "", - "Domainname": "", - "Entrypoint": null, - "Env": [ - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" - ], - "ExposedPorts": null, - "Hostname": "ba033ac44011", - "Image": "ubuntu", - "MacAddress": "", - "Memory": 0, - "MemorySwap": 0, - "NetworkDisabled": false, - "OnBuild": null, - "OpenStdin": false, - "PortSpecs": null, - "StdinOnce": false, - "Tty": false, - "User": "", - "Volumes": null, - "WorkingDir": "" - }, - "Created": "2015-01-06T15:47:31.485331387Z", - "Driver": "devicemapper", - "ExecDriver": "native-0.2", - "ExecIDs": null, - "HostConfig": { - "Binds": null, - "CapAdd": null, - "CapDrop": null, - "ContainerIDFile": "", - "Devices": [], - "Dns": null, - "DnsSearch": null, - "ExtraHosts": null, - "IpcMode": "", - "Links": null, - "LxcConf": [], - "NetworkMode": "bridge", - "PortBindings": {}, - "Privileged": false, - "ReadonlyRootfs": false, - "PublishAllPorts": false, - "RestartPolicy": { - "MaximumRetryCount": 2, - "Name": 
"on-failure" - }, - "SecurityOpt": null, - "VolumesFrom": null - }, - "HostnamePath": "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname", - "HostsPath": "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts", - "Id": "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39", - "Image": "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2", - "MountLabel": "", - "Name": "/boring_euclid", - "NetworkSettings": { - "Bridge": "", - "Gateway": "", - "IPAddress": "", - "IPPrefixLen": 0, - "MacAddress": "", - "PortMapping": null, - "Ports": null - }, - "Path": "/bin/sh", - "ProcessLabel": "", - "ResolvConfPath": "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf", - "RestartCount": 1, - "State": { - "Error": "", - "ExitCode": 9, - "FinishedAt": "2015-01-06T15:47:32.080254511Z", - "OOMKilled": false, - "Paused": false, - "Pid": 0, - "Restarting": false, - "Running": false, - "StartedAt": "2015-01-06T15:47:32.072697474Z" - }, - "Volumes": {}, - "VolumesRW": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** 
– server error - -### Get container logs - -`GET /containers/(id)/logs` - -Get stdout and stderr logs from the container ``id`` - -**Example request**: - - GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1 - -**Example response**: - - HTTP/1.1 101 UPGRADED - Content-Type: application/vnd.docker.raw-stream - Connection: Upgrade - Upgrade: tcp - - {{ STREAM }} - -Query Parameters: - -- **follow** – 1/True/true or 0/False/false, return stream. Default false -- **stdout** – 1/True/true or 0/False/false, show stdout log. Default false -- **stderr** – 1/True/true or 0/False/false, show stderr log. Default false -- **timestamps** – 1/True/true or 0/False/false, print timestamps for - every log line. Default false -- **tail** – Output the specified number of lines at the end of logs: `all` or `<number>`. Default all - -Status Codes: - -- **101** – no error, hints proxy about hijacking -- **200** – no error, no upgrade header found -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Get container stats based on resource usage - -`GET /containers/(id)/stats` - 
-This endpoint returns a live stream of a container's resource usage statistics. - -> **Note**: this functionality currently only works when using the *libcontainer* exec-driver. - -**Example request**: - - GET /containers/redis1/stats HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "read" : "2015-01-08T22:57:31.547920715Z", - "network" : { - "rx_dropped" : 0, - "rx_bytes" : 648, - "rx_errors" : 0, - "tx_packets" : 8, - "tx_dropped" : 0, - "rx_packets" : 8, - "tx_errors" : 0, - "tx_bytes" : 648 - }, - "memory_stats" : { - "stats" : { - "total_pgmajfault" : 0, - "cache" : 0, - "mapped_file" : 0, - "total_inactive_file" : 0, - "pgpgout" : 414, - "rss" : 6537216, - "total_mapped_file" : 0, - "writeback" : 0, - "unevictable" : 0, - "pgpgin" : 477, - "total_unevictable" : 0, - "pgmajfault" : 0, - "total_rss" : 6537216, - "total_rss_huge" : 6291456, - "total_writeback" : 0, - "total_inactive_anon" : 0, - "rss_huge" : 6291456, - "hierarchical_memory_limit" : 67108864, - "total_pgfault" : 964, - "total_active_file" : 0, - "active_anon" : 6537216, - "total_active_anon" : 6537216, - "total_pgpgout" : 414, - "total_cache" : 0, - "inactive_anon" : 0, - "active_file" : 0, - "pgfault" : 964, - "inactive_file" : 0, - "total_pgpgin" : 477 - }, - "max_usage" : 6651904, - "usage" : 6537216, - "failcnt" : 0, - "limit" : 67108864 - }, - "blkio_stats" : {}, - "cpu_stats" : { - "cpu_usage" : { - "percpu_usage" : [ - 16970827, - 1839451, - 7107380, - 10571290 - ], - "usage_in_usermode" : 10000000, - "total_usage" : 36488948, - "usage_in_kernelmode" : 20000000 - }, - "system_cpu_usage" : 20091722000000000, - "throttling_data" : {} - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Resize a container TTY - -`POST /containers/(id)/resize?h=&w=` - -Resize the TTY for container with `id`. The container must be restarted for the resize to take effect. 
- -**Example request**: - - POST /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Length: 0 - Content-Type: text/plain; charset=utf-8 - -Status Codes: - -- **200** – no error -- **404** – No such container -- **500** – Cannot resize container - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **304** – container already started -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **304** – container already stopped -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call waits for the container to exit. 
- -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Rename a container - -`POST /containers/(id)/rename` - -Rename the container `id` to a `new_name` - -**Example request**: - - POST /containers/e90e34656806/rename?name=new_name HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **name** – new name for the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **409** - conflict name already assigned -- **500** – server error - -### Pause a container - -`POST /containers/(id)/pause` - -Pause the container `id` - -**Example request**: - - POST /containers/e90e34656806/pause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Unpause a container - -`POST /containers/(id)/unpause` - -Unpause the container `id` - -**Example request**: - - POST /containers/e90e34656806/unpause HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 101 UPGRADED - Content-Type: application/vnd.docker.raw-stream - Connection: Upgrade - Upgrade: tcp - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. 
Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **101** – no error, hints proxy about hijacking -- **200** – no error, no upgrade header found -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains the information about which stream the frame - belongs to (stdout or stderr). It also contains the size of the - associated frame, encoded in the last 4 bytes (uint32). - - It is encoded in the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. 
Goto 1 - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false -- **force** - 1/True/true or 0/False/false, Kill then remove the container. 
- Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, default false -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the images list. 
Available filters: - - dangling=true - -### Build image from a Dockerfile - -`POST /build` - -Build an image from a Dockerfile - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - -The input stream must be a tar archive compressed with one of the -following algorithms: identity (no compression), gzip, bzip2, xz. - -The archive must include a build instructions file, typically called -`Dockerfile` at the root of the archive. The `dockerfile` parameter may be -used to specify a different build instructions file by having its value be -the path to the alternate build instructions file to use. - -The archive may include any number of other files, -which will be accessible in the build context (See the [*ADD build -command*](/reference/builder/#dockerbuilder)). - -Query Parameters: - -- **dockerfile** - path within the build context to the Dockerfile -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – git or HTTP/HTTPS URI build source -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **pull** - attempt to pull the image even if an older image exists locally -- **rm** - remove intermediate containers after a successful build (default behavior) -- **forcerm** - always remove intermediate containers (includes rm) - - Request Headers: - -- **Content-type** – should be set to `"application/tar"`. 
-- **X-Registry-Config** – base64-encoded ConfigFile objec - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import. The value may be a URL from which the image - can be retrieved or `-` to read the image from the request body. -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - - Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - - - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/ubuntu/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Created": "2013-03-23T22:24:18.818426-07:00", - "Container": "3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "ContainerConfig": - { - "Hostname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "PortSpecs": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": false, - "Env": null, - "Cmd": ["/bin/bash"], - "Dns": null, - "Image": "ubuntu", - "Volumes": null, - "VolumesFrom": "", - "WorkingDir": "" - }, - "Id": 
"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Parent": "27cf784147099545", - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/ubuntu/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - If you wish to push an image onto a private registry, that image must already have been tagged - into a repository which references that registry host name and port. This repository name should - then be used in the URL. This mirrors the flow of the CLI. - -**Example request**: - - POST /images/registry.acme.com:5000/test/push HTTP/1.1 - - -Query Parameters: - -- **tag** – the tag to associate with the image on the registry, optional - -Request Headers: - -- **X-Registry-Auth** – include a base64-encoded AuthConfig - object. 
- -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Query Parameters: - -- **force** – 1/True/true or 0/False/false, default false -- **noprune** – 1/True/true or 0/False/false, default false - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. 
- -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_automated": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Driver":"btrfs", - "DriverStatus": [[""]], - "ExecutionDriver":"native-0.1", - "KernelVersion":"3.12.0-1-amd64", - "NCPU":1, - "MemTotal":2099236864, - "Name":"prod-server-42", - "ID":"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS", - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "NEventsListener":0, - "InitPath":"/usr/bin/docker", - "InitSha1":"", - "IndexServerAddress":["https://index.docker.io/v1/"], - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true, - "Labels":["storage=ssd"], - "DockerRootDir": "/var/lib/docker", - "OperatingSystem": "Boot2Docker" - } - -Status Codes: - -- **200** – 
no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "ApiVersion": "1.12", - "Version": "0.2.2", - "GitCommit": "5a2a5cc+CHANGES", - "GoVersion": "go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Ping the docker server - -`GET /_ping` - -Ping the docker server - -**Example request**: - - GET /_ping HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - OK - -Status Codes: - -- **200** - no error -- **500** - server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname": "", - "Domainname": "", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Volumes": { - "/tmp": {} - }, - "WorkingDir": "", - "NetworkDisabled": false, - "ExposedPorts": { - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **comment** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - 
-Get container events from docker, either in real time via streaming, or via -polling (using `since`). - -Docker containers will report the following events: - - create, destroy, die, exec_create, exec_start, export, kill, oom, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "ubuntu:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling -- **until** – timestamp used for polling -- **filters** – a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters: - - event=<string> -- event to filter - - image=<string> -- image to filter - - container=<string> -- container to filter - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository specified -by `name`. - -If `name` is a specific name and tag (e.g. ubuntu:latest), then only that image -(and its parents) are returned. If `name` is an image ID, similarly only that -image (and its parents) are returned, but with the exclusion of the -'repositories' file in the tarball, as there are no image names referenced. - -See the [image tarball format](#image-tarball-format) for more details. 
- -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images. - -`GET /images/get` - -Get a tarball containing all images and metadata for one or more repositories. - -For each value of the `names` parameter: if it is a specific name and tag (e.g. -ubuntu:latest), then only that image (and its parents) are returned; if it is -an image ID, similarly only that image (and its parents) are returned and there -would be no names referenced in the 'repositories' file for this image ID. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. 
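This layer-per-directory layout is easy to inspect with Python's standard `tarfile` module. The sketch below is illustrative only; `list_layers` and the layer ID it indexes are hypothetical names, not part of the API:

```python
import io
import json
import tarfile

def list_layers(tar_bytes):
    """Index an image tarball: one directory per layer, each holding
    VERSION, json and layer.tar. Returns {layer_id: {"VERSION", "json"}}."""
    layers = {}
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            parts = member.name.strip("/").split("/")
            if len(parts) != 2:
                continue  # skip top-level entries such as 'repositories'
            layer_id, filename = parts
            entry = layers.setdefault(layer_id, {})
            if filename == "VERSION":
                entry["VERSION"] = tar.extractfile(member).read().decode()
            elif filename == "json":
                entry["json"] = json.loads(tar.extractfile(member).read())
    return layers
```

Feeding it the body of `GET /images/(name)/get` would yield one entry per layer, keyed by the long layer ID.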
- -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -### Exec Create - -`POST /containers/(id)/exec` - -Sets up an exec instance in a running container `id` - -**Example request**: - - POST /containers/e90e34656806/exec HTTP/1.1 - Content-Type: application/json - - { - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "Tty": false, - "Cmd": [ - "date" - ] - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/json - - { - "Id": "f90e34656806" - } - -Json Parameters: - -- **AttachStdin** - Boolean value, attaches to stdin of the exec command. -- **AttachStdout** - Boolean value, attaches to stdout of the exec command. -- **AttachStderr** - Boolean value, attaches to stderr of the exec command. -- **Tty** - Boolean value to allocate a pseudo-TTY -- **Cmd** - Command to run specified as a string or an array of strings. - - -Status Codes: - -- **201** – no error -- **404** – no such container - -### Exec Start - -`POST /exec/(id)/start` - -Starts a previously set up exec instance `id`. If `detach` is true, this API -returns after starting the `exec` command. Otherwise, this API sets up an -interactive session with the `exec` command.
- -**Example request**: - - POST /exec/e90e34656806/start HTTP/1.1 - Content-Type: application/json - - { - "Detach": false, - "Tty": false - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/json - - {{ STREAM }} - -Json Parameters: - -- **Detach** - Detach from the exec command -- **Tty** - Boolean value to allocate a pseudo-TTY - -Status Codes: - -- **201** – no error -- **404** – no such exec instance - - **Stream details**: - Similar to the stream behavior of `POST /container/(id)/attach` API - -### Exec Resize - -`POST /exec/(id)/resize` - -Resizes the tty session used by the exec command `id`. -This API is valid only if `tty` was specified as part of creating and starting the exec command. - -**Example request**: - - POST /exec/e90e34656806/resize HTTP/1.1 - Content-Type: text/plain - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: text/plain - -Query Parameters: - -- **h** – height of tty session -- **w** – width - -Status Codes: - -- **201** – no error -- **404** – no such exec instance - -### Exec Inspect - -`GET /exec/(id)/json` - -Return low-level information about the exec command `id`. 
- -**Example request**: - - GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: plain/text - - { - "ID" : "11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39", - "Running" : false, - "ExitCode" : 2, - "ProcessConfig" : { - "privileged" : false, - "user" : "", - "tty" : false, - "entrypoint" : "sh", - "arguments" : [ - "-c", - "exit 2" - ] - }, - "OpenStdin" : false, - "OpenStderr" : false, - "OpenStdout" : false, - "Container" : { - "State" : { - "Running" : true, - "Paused" : false, - "Restarting" : false, - "OOMKilled" : false, - "Pid" : 3650, - "ExitCode" : 0, - "Error" : "", - "StartedAt" : "2014-11-17T22:26:03.717657531Z", - "FinishedAt" : "0001-01-01T00:00:00Z" - }, - "ID" : "8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c", - "Created" : "2014-11-17T22:26:03.626304998Z", - "Path" : "date", - "Args" : [], - "Config" : { - "Hostname" : "8f177a186b97", - "Domainname" : "", - "User" : "", - "Memory" : 0, - "MemorySwap" : 0, - "CpuShares" : 0, - "Cpuset" : "", - "AttachStdin" : false, - "AttachStdout" : false, - "AttachStderr" : false, - "PortSpecs" : null, - "ExposedPorts" : null, - "Tty" : false, - "OpenStdin" : false, - "StdinOnce" : false, - "Env" : [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], - "Cmd" : [ - "date" - ], - "Image" : "ubuntu", - "Volumes" : null, - "WorkingDir" : "", - "Entrypoint" : null, - "NetworkDisabled" : false, - "MacAddress" : "", - "OnBuild" : null, - "SecurityOpt" : null - }, - "Image" : "5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5", - "NetworkSettings" : { - "IPAddress" : "172.17.0.2", - "IPPrefixLen" : 16, - "MacAddress" : "02:42:ac:11:00:02", - "Gateway" : "172.17.42.1", - "Bridge" : "docker0", - "PortMapping" : null, - "Ports" : {} - }, - "ResolvConfPath" : 
"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf",
-        "HostnamePath" : "/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname",
-        "HostsPath" : "/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts",
-        "Name" : "/test",
-        "Driver" : "aufs",
-        "ExecDriver" : "native-0.2",
-        "MountLabel" : "",
-        "ProcessLabel" : "",
-        "AppArmorProfile" : "",
-        "RestartCount" : 0,
-        "Volumes" : {},
-        "VolumesRW" : {}
-      }
-    }
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such exec instance
-- **500** – server error
-
-# 3. Going further
-
-## 3.1 Inside `docker run`
-
-As an example, the `docker run` command line makes the following API calls:
-
-- Create the container
-
-- If the status code is 404, it means the image doesn't exist:
-    - Try to pull it
-    - Then retry to create the container
-
-- Start the container
-
-- If you are not in detached mode:
-    - Attach to the container, using logs=1 (to have stdout and
-      stderr from the container's start) and stream=1
-
-- If in detached mode or only stdin is attached:
-    - Display the container's id
-
-## 3.2 Hijacking
-
-In this version of the API, `/attach` uses hijacking to transport stdin,
-stdout and stderr over the same socket.
-
-To hint potential proxies about the connection hijacking, the Docker client
-sends connection upgrade headers, similar to websockets:
-
-    Upgrade: tcp
-    Connection: Upgrade
-
-When the Docker daemon detects the `Upgrade` header, it switches its status code
-from **200 OK** to **101 UPGRADED** and resends the same headers.
-
-This might change in the future.
-
-## 3.3 CORS Requests
-
-To enable cross-origin requests to the Remote API, add the flag
-`--api-enable-cors` when running docker in daemon mode.
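The upgrade handshake described in section 3.2 can be sketched from the client side. This is a minimal illustration only: the header names and the 101 status switch come from the text above, while the helper function names are hypothetical, not part of any Docker client library.

```python
def build_attach_request(container_id: str) -> bytes:
    """Build the raw HTTP request that asks the daemon to hijack the connection."""
    lines = [
        f"POST /containers/{container_id}/attach?stream=1&stdout=1&stderr=1 HTTP/1.1",
        "Host: localhost",
        "Upgrade: tcp",          # hint proxies that the connection will be hijacked
        "Connection: Upgrade",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def handshake_accepted(status_line: bytes) -> bool:
    """The daemon answers 101 when it honours the Upgrade header, 200 otherwise."""
    return status_line.split()[1] == b"101"

req = build_attach_request("16253994b7c4")
assert b"Upgrade: tcp" in req
assert handshake_accepted(b"HTTP/1.1 101 UPGRADED")
assert not handshake_accepted(b"HTTP/1.1 200 OK")
```

After a successful handshake the socket stops being HTTP: the same connection carries the container's stdin, stdout and stderr as a raw stream.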
- - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.2.md~ b/reference/api/docker_remote_api_v1.2.md~ deleted file mode 100644 index 3438eab2db..0000000000 --- a/reference/api/docker_remote_api_v1.2.md~ +++ /dev/null @@ -1,1017 +0,0 @@ -page_title: Remote API v1.2 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.2 - -# 1. Brief introduction - -- The Remote API is replacing rcli -- Default port in the docker daemon is 2375 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default -- **limit** – Show `limit` last created - containers, include non-running ones. 
-- **since** – Show only containers created since Id, include - non-running ones. -- **before** – Show only containers created before Id, include - non-running ones. - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": 
"2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/e90e34656806/start HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status 
Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Restart a container
-
-`POST /containers/(id)/restart`
-
-Restart the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/restart?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Kill a container
-
-`POST /containers/(id)/kill`
-
-Kill the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/kill HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Attach to a container
-
-`POST /containers/(id)/attach`
-
-Attach to the container `id`
-
-**Example request**:
-
-    POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default
-  false
-- **stream** – 1/True/true or 0/False/false, return stream.
-  Default false
-- **stdin** – 1/True/true or 0/False/false, if stream=true, attach
-  to stdin. Default false
-- **stdout** – 1/True/true or 0/False/false, if logs=true, return
-  stdout log, if stream=true, attach to stdout. Default false
-- **stderr** – 1/True/true or 0/False/false, if logs=true, return
-  stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" [label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. 
-  Only running containers are shown by default
-
-Status Codes:
-
-- **200** – no error
-- **400** – bad parameter
-- **500** – server error
-
-### Create an image
-
-`POST /images/create`
-
-Create an image, either by pulling it from the registry or by importing it
-
-**Example request**:
-
-    POST /images/create?fromImage=ubuntu HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {"status":"Pulling..."}
-    {"status":"Pulling", "progress":"1/? (n/a)"}
-    {"error":"Invalid..."}
-    ...
-
-Query Parameters:
-
-- **fromImage** – name of the image to pull
-- **fromSrc** – source to import; `-` means stdin
-- **repo** – repository
-- **tag** – tag
-- **registry** – the registry to pull from
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Insert a file in an image
-
-`POST /images/(name)/insert`
-
-Insert a file from `url` in the image `name` at `path`
-
-**Example request**:
-
-    POST /images/test/insert?path=/usr&url=myurl HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {"status":"Inserting..."}
-    {"status":"Inserting", "progress":"1/? (n/a)"}
-    {"error":"Invalid..."}
-    ...
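Both `create` and `insert` respond with a stream of newline-separated JSON objects carrying either a `status`/`progress` pair or an `error`. A minimal sketch of consuming such a stream (the function name is hypothetical):

```python
import json

def parse_progress_stream(body: str):
    """Split the line-delimited JSON emitted by POST /images/create and
    POST /images/(name)/insert into status updates and errors."""
    statuses, errors = [], []
    for line in body.splitlines():
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        if "error" in obj:
            errors.append(obj["error"])
        else:
            statuses.append((obj.get("status"), obj.get("progress")))
    return statuses, errors

body = '{"status":"Pulling..."}\n{"status":"Pulling", "progress":"1/? (n/a)"}\n{"error":"Invalid..."}'
statuses, errors = parse_progress_stream(body)
assert statuses == [("Pulling...", None), ("Pulling", "1/? (n/a)")]
assert errors == ["Invalid..."]
```

A real client would parse each line as it arrives rather than buffering the whole body, since the stream is open for the duration of the pull.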
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - "VolumesFrom":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id":"b750fe79269d", - "Tag":["ubuntu:latest"], - "Created":1364102658, - "CreatedBy":"/bin/bash" - }, - { - "Id":"27cf78414709", - "Created":1364068391, - "CreatedBy":"" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - - > **Example request**: - > - > POST /images/test/push HTTP/1.1 - > {{ authConfig }} - > - > **Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} - 
{"status":"Pushing", "progress":"1/? (n/a)"}
-    {"error":"Invalid..."}
-    ...
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such image
-- **500** – server error
-
-### Tag an image into a repository
-
-`POST /images/(name)/tag`
-
-Tag the image `name` into a repository
-
-**Example request**:
-
-    POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-
-Query Parameters:
-
-- **repo** – The repository to tag in
-- **force** – 1/True/true or 0/False/false, default false
-- **tag** – The new tag name
-
-Status Codes:
-
-- **201** – no error
-- **400** – bad parameter
-- **404** – no such image
-- **409** – conflict
-- **500** – server error
-
-### Remove an image
-
-`DELETE /images/(name)`
-
-Remove the image `name` from the filesystem
-
-**Example request**:
-
-    DELETE /images/test HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    [
-     {"Untagged": "3e2f21a89f"},
-     {"Deleted": "3e2f21a89f"},
-     {"Deleted": "53b4f83ac9"}
-    ]
-
-Status Codes:
-
-- **200** – no error
-- **404** – no such image
-- **409** – conflict
-- **500** – server error
-
-### Search images
-
-`GET /images/search`
-
-Search for an image on [Docker Hub](https://hub.docker.com)
-
-**Example request**:
-
-    GET /images/search?term=sshd HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    [
-     {
-      "Name":"cespare/sshd",
-      "Description":""
-     },
-     {
-      "Name":"johnfuller/sshd",
-      "Description":""
-     },
-     {
-      "Name":"dhrp/mongodb-sshd",
-      "Description":""
-     }
-    ]
-
-Query Parameters:
-
-- **term** – term to search
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-## 2.3 Misc
-
-### Build an image from Dockerfile via stdin
-
-`POST /build`
-
-Build an image from Dockerfile
-
-**Example request**:
-
-    POST /build HTTP/1.1
-
-    {{ TAR STREAM }}
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: text/plain
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **t** – repository name to be
applied to the resulting image in
-  case of success
-- **remote** – resource to fetch, as URI
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-{{ STREAM }} is the raw text output of the build command. It uses the
-HTTP Hijack method in order to stream.
-
-### Check auth configuration
-
-`POST /auth`
-
-Get the default username and email
-
-**Example request**:
-
-    POST /auth HTTP/1.1
-    Content-Type: application/json
-
-    {
-     "username":"hannibal",
-     "password":"xxxx",
-     "email":"hannibal@a-team.com"
-    }
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-     "Status": "Login Succeeded"
-    }
-
-Status Codes:
-
-- **200** – no error
-- **204** – no error
-- **401** – unauthorized
-- **403** – forbidden
-- **500** – server error
-
-### Display system-wide information
-
-`GET /info`
-
-Display system-wide information
-
-**Example request**:
-
-    GET /info HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-     "Containers":11,
-     "Images":16,
-     "Debug":false,
-     "NFd": 11,
-     "NGoroutines":21,
-     "MemoryLimit":true,
-     "SwapLimit":false
-    }
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Show the docker version information
-
-`GET /version`
-
-Show the docker version information
-
-**Example request**:
-
-    GET /version HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/json
-
-    {
-     "Version":"0.2.2",
-     "GitCommit":"5a2a5cc+CHANGES",
-     "GoVersion":"go1.0.3"
-    }
-
-Status Codes:
-
-- **200** – no error
-- **500** – server error
-
-### Create a new image from a container's changes
-
-`POST /commit`
-
-Create a new image from a container's changes
-
-**Example request**:
-
-    POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1
-    Content-Type: application/json
-
-    {
-     "Cmd": ["cat", "/world"],
-     "PortSpecs":["22"]
-    }
-
-**Example response**:
-
-    HTTP/1.1 201 Created
-    Content-Type: application/json
-
-    {"Id":
"596069db4bf5"}
-
-Query Parameters:
-
-- **container** – source container
-- **repo** – repository
-- **tag** – tag
-- **m** – commit message
-- **author** – author (e.g., "John Hannibal Smith
-  <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>")
-
-Status Codes:
-
-- **201** – no error
-- **404** – no such container
-- **500** – server error
-
-# 3. Going further
-
-## 3.1 Inside `docker run`
-
-Here are the steps of `docker run`:
-
-- Create the container
-
-- If the status code is 404, it means the image doesn't exist:
-    - Try to pull it
-    - Then retry to create the container
-
-- Start the container
-
-- If you are not in detached mode:
-    - Attach to the container, using logs=1 (to have stdout and
-      stderr from the container's start) and stream=1
-
-- If in detached mode or only stdin is attached:
-    - Display the container's id
-
-## 3.2 Hijacking
-
-In this version of the API, `/attach` uses hijacking to transport stdin,
-stdout and stderr on the same socket. This might change in the future.
-
-## 3.3 CORS Requests
-
-To enable cross-origin requests to the Remote API, add the flag
-`--api-enable-cors` when running docker in daemon mode.
-
-    $ docker -d -H="tcp://192.168.1.9:2375" --api-enable-cors
diff --git a/reference/api/docker_remote_api_v1.3.md~ b/reference/api/docker_remote_api_v1.3.md~
deleted file mode 100644
index 5a88d8276b..0000000000
--- a/reference/api/docker_remote_api_v1.3.md~
+++ /dev/null
@@ -1,1103 +0,0 @@
-page_title: Remote API v1.3
-page_description: API Documentation for Docker
-page_keywords: API, Docker, rcli, REST, documentation
-
-# Docker Remote API v1.3
-
-# 1. Brief introduction
-
-- The Remote API is replacing rcli
-- Default port in the docker daemon is 2375
-- The API tends to be REST, but for some complex commands, like attach
-  or pull, the HTTP connection is hijacked to transport stdout, stdin
-  and stderr
-
-# 2.
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": 
"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "PID":"11935", - "Tty":"pts/2", - "Time":"00:00:00", - "Cmd":"sh" - }, - { - "PID":"12140", - "Tty":"pts/2", - "Time":"00:00:00", - "Cmd":"sleep" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST 
/containers/(id)/start HTTP/1.1
-    Content-Type: application/json
-
-    {
-     "Binds":["/tmp:/tmp"]
-    }
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-    Content-Type: text/plain
-
-Json Parameters:
-
-- **hostConfig** – the container's host configuration (optional)
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Stop a container
-
-`POST /containers/(id)/stop`
-
-Stop the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/stop?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Restart a container
-
-`POST /containers/(id)/restart`
-
-Restart the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/restart?t=5 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Query Parameters:
-
-- **t** – number of seconds to wait before killing the container
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Kill a container
-
-`POST /containers/(id)/kill`
-
-Kill the container `id`
-
-**Example request**:
-
-    POST /containers/e90e34656806/kill HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 204 No Content
-
-Status Codes:
-
-- **204** – no error
-- **404** – no such container
-- **500** – server error
-
-### Attach to a container
-
-`POST /containers/(id)/attach`
-
-Attach to the container `id`
-
-**Example request**:
-
-    POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
-
-**Example response**:
-
-    HTTP/1.1 200 OK
-    Content-Type: application/vnd.docker.raw-stream
-
-    {{ STREAM }}
-
-Query Parameters:
-
-- **logs** – 1/True/true or 0/False/false, return logs. Default
-  false
-- **stream** – 1/True/true or 0/False/false, return stream.
- Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - 
"d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" [label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all images. - Only images from a final layer are shown by default - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ...
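Because `/images/create` and `/images/(name)/insert` stream their progress as one JSON object per line rather than a single document, a client has to parse the body incrementally and watch for `error` entries. A minimal client-side sketch (illustrative only; `parse_progress_stream` is not part of the API):

```python
import json

def parse_progress_stream(raw):
    """Parse the newline-delimited JSON objects streamed back by
    /images/create and /images/(name)/insert, raising on an error entry."""
    statuses = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        if "error" in msg:
            # the daemon reports failures inline in the stream
            raise RuntimeError(msg["error"])
        statuses.append(msg.get("status", ""))
    return statuses

sample = '{"status":"Pulling..."}\n{"status":"Pulling", "progress":"1/? (n/a)"}'
print(parse_progress_stream(sample))  # ['Pulling...', 'Pulling']
```

In a real client the same loop would run over the chunked HTTP response body as it arrives, rather than over a pre-collected string.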
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - "VolumesFrom":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - - > **Example request**: - > - > POST /images/test/push HTTP/1.1 - > {{ authConfig }} - > - > **Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} - {"status":"Pushing", "progress":"1/? 
(n/a)"} - {"error":"Invalid..."} - ... - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - - :query term: term to search - :statuscode 200: no error - :statuscode 500: server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ STREAM }} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no 
compression), gzip, bzip2, xz. - The archive must include a file called Dockerfile at its root. I - may include any number of other files, which will be accessible in - the build context (See the ADD build command). - - The Content-type header should be set to "application/tar". - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username":"hannibal", - "password:"xxxx", - "email":"hannibal@a-team.com" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "EventsListeners":"0", - "LXCVersion":"0.7.5", - "KernelVersion":"3.8.0-19-generic" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - 
-**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "PortSpecs":["22"] - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"create","id":"dfdf82bd3881","time":1374067924} - {"status":"start","id":"dfdf82bd3881","time":1374067924} - {"status":"stop","id":"dfdf82bd3881","time":1374067966} - {"status":"destroy","id":"dfdf82bd3881","time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run` : - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - -> docker -d -H="192.168.1.9:2375" -api-enable-cors diff --git a/reference/api/docker_remote_api_v1.4.md~ b/reference/api/docker_remote_api_v1.4.md~ deleted file mode 100644 index 790c97d079..0000000000 --- a/reference/api/docker_remote_api_v1.4.md~ +++ /dev/null @@ -1,1146 +0,0 @@ -page_title: Remote API v1.4 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.4 - -# 1. Brief introduction - -- The Remote API is replacing rcli -- Default port in the docker daemon is 2375 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports":"", - "SizeRw":12288, - "SizeRootFs":0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Privileged": false, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"", - "WorkingDir":"" - - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": 
false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **409** – conflict between containers and images -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 
200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST 
/containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - 
"VirtualSize":180116135 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" [label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all images. - Only images from a final layer are shown by default - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ...
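The boolean query parameters throughout this API all accept the same `1/True/true or 0/False/false` convention. A small helper can normalize such values before building a request URL (a sketch; `bool_param` and `list_images_url` are hypothetical helper names, not part of the API):

```python
from urllib.parse import urlencode

TRUTHY = {"1", "true"}
FALSY = {"0", "false"}

def bool_param(value):
    """Normalize the 1/True/true or 0/False/false convention used by
    the remote API's boolean query parameters."""
    s = str(value).lower()
    if s in TRUTHY:
        return "1"
    if s in FALSY:
        return "0"
    raise ValueError("not a recognized boolean: %r" % (value,))

def list_images_url(all_images=False):
    # Hypothetical helper: builds the query string for GET /images/json
    return "/images/json?" + urlencode({"all": bool_param(all_images)})

print(list_images_url(True))   # /images/json?all=1
```

Normalizing on the client side keeps request URLs consistent regardless of whether callers pass Python booleans or the string forms the API documents.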
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict between containers and images -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example 
request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - {{ authConfig }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} {"status":"Pushing", "progress":"1/? (n/a)"} - {"error":"Invalid..."} ... - -Status Codes: - -- **200** – no error :statuscode 404: no such image :statuscode - 500: server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET 
/images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - - :query term: term to search - :statuscode 200: no error - :statuscode 500: server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ STREAM }} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - The archive must include a file called Dockerfile at its root. I - may include any number of other files, which will be accessible in - the build context (See the ADD build command). - - The Content-type header should be set to "application/tar". 
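Since the build context sent to `POST /build` must be a tar archive with a `Dockerfile` at its root, a client can assemble one in memory before POSTing it with `Content-Type: application/tar`. A sketch under those assumptions (the helper name is hypothetical):

```python
import io
import tarfile

def build_context(dockerfile_text):
    """Create an in-memory tar archive with a Dockerfile at its root,
    suitable as the body of POST /build (Content-Type: application/tar)."""
    buf = io.BytesIO()
    data = dockerfile_text.encode("utf-8")
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="Dockerfile")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    buf.seek(0)
    return buf

ctx = build_context("FROM ubuntu\nRUN echo hello\n")
with tarfile.open(fileobj=ctx) as tar:
    print(tar.getnames())  # ['Dockerfile']
```

Additional files added to the same archive become available to `ADD` instructions in the build context.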
- -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "PortSpecs":["22"] - } - -**Example response**: - - HTTP/1.1 201 
OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"create","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067924} - {"status":"start","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067924} - {"status":"stop","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067966} - {"status":"destroy","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -# 3. Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run` : - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. 
This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.5.md~ b/reference/api/docker_remote_api_v1.5.md~ deleted file mode 100644 index c2f0a7add0..0000000000 --- a/reference/api/docker_remote_api_v1.5.md~ +++ /dev/null @@ -1,1152 +0,0 @@ -page_title: Remote API v1.5 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.5 - -# 1. Brief introduction - -- The Remote API is replacing rcli -- Default port in the docker daemon is 2375 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "ubuntu:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "9cd87474be90", - "Image": "ubuntu:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "3176a2479c92", - "Image": "centos:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "fedora:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - } 
- ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. -- **size** – 1/True/true or 0/False/false, Show the containers sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Privileged": false, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"ubuntu", - "Volumes":{}, - "VolumesFrom":"", - "WorkingDir":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - 
"AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "ubuntu", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir":"" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles":[ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes":[ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path":"/dev", - "Kind":0 - }, - { - "Path":"/dev/kmsg", - "Kind":1 
- }, - { - "Path":"/test", - "Kind":1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}] - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST 
/containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. 
Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource":"test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"ubuntu", - "Tag":"precise", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - }, - { - "Repository":"ubuntu", - "Tag":"12.04", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nubuntu",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\ncentos",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" 
[label="9a33b36209ed\nfedora",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all images. - Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=ubuntu HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, `-` means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... 
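Endpoints such as `/images/create` and `/images/(name)/insert` return one JSON status object per chunk, as the example responses above show, and a client can decode each record as it arrives and stop on an `"error"` record. A minimal sketch of such a consumer (the helper name is illustrative, not part of the API):

```python
import json

def read_progress_stream(chunks):
    """Decode a stream of JSON status records, in the shape returned by
    POST /images/create, raising if the daemon reports an error record."""
    records = []
    for raw in chunks:
        raw = raw.strip()
        if not raw:
            continue
        record = json.loads(raw)
        if "error" in record:
            raise RuntimeError(record["error"])
        records.append(record)
    return records

# Records in the shape shown in the example responses above.
sample = [
    '{"status":"Inserting..."}',
    '{"status":"Inserting", "progress":"1/? (n/a)"}',
]
for rec in read_progress_stream(sample):
    print(rec["status"])
```

In a real client the `chunks` iterable would be the lines read off the HTTP response body as they arrive.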
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/centos/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"centos", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/fedora/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id":"b750fe79269d", - "Created":1364102658, - "CreatedBy":"/bin/bash" - }, - { - "Id":"27cf78414709", - "Created":1364068391, - "CreatedBy":"" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} - {"status":"Pushing", "progress":"1/? 
(n/a)"} - {"error":"Invalid..."} - ... - - The `X-Registry-Auth` header can be used to - include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged":"3e2f21a89f"}, - {"Deleted":"3e2f21a89f"}, - {"Deleted":"53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ 
STREAM }} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - The archive must include a file called Dockerfile at its root. It - may include any number of other files, which will be accessible in - the build context (See the ADD build command). - - The Content-type header should be set to "application/tar". - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** – remove intermediate containers after a successful build - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username":"hannibal", - "password":"xxxx", - "email":"hannibal@a-team.com", - "serveraddress":"https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - 
"GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "PortSpecs":["22"] - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"create","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067924} - {"status":"start","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067924} - {"status":"stop","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067966} - {"status":"destroy","id":"dfdf82bd3881","from":"ubuntu:latest","time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run`: - - - Create the container - - If the status code is 404, it means the image doesn't exist: - Try to pull it - Then retry to create the container - - Start the container - - If you are not in detached mode: - Attach to the container, using logs=1 (to have stdout and stderr - from the container's start) and stream=1 - - If in detached mode or only stdin is attached: - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.6.md~ b/reference/api/docker_remote_api_v1.6.md~ deleted file mode 100644 index d0f9661e50..0000000000 --- a/reference/api/docker_remote_api_v1.6.md~ +++ /dev/null @@ -1,1254 +0,0 @@ -page_title: Remote API v1.6 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.6 - -# 1. Brief introduction - - - The Remote API has replaced rcli - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` - and `stderr` - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "base:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "base:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "base:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "base:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "ExposedPorts":{}, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"base", - "Volumes":{}, - "VolumesFrom":"", - "WorkingDir":"" - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Query Parameters: - -   - -- **name** – container name to use - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - - **More Complex Example request, in 2 steps.** **First, use create to - expose a Private Port, which can be bound back to a Public Port at - startup**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Cmd":[ - "/usr/sbin/sshd","-D" - ], - "Image":"image-with-sshd", - "ExposedPorts":{"22/tcp":{}} - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - - **Second, start (using the ID returned above) the image we just - created, mapping the ssh port 22 to something on the host**: - - POST /containers/e90e34656806/start HTTP/1.1 - Content-Type: application/json - - { - "PortBindings": { "22/tcp": [{ "HostPort": "11022" }]} - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain; charset=utf-8 - Content-Length: 0 - - **Now you can ssh into your new container on 
port 11022.** - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "ExposedPorts": {}, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "base", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - 
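The `Titles` array and each `Processes` row in the response above line up column by column, so a client can zip them into per-process records; note from the last row above that the command can spill into extra fields, which a client will usually want to re-join. A small sketch (assuming a Python client; the helper name is illustrative):

```python
import json

def top_to_rows(body):
    """Pair each process entry from GET /containers/(id)/top with the
    column titles. Fields beyond the last title are re-joined into it,
    since the COMMAND column can span several fields (e.g. "sleep 10")."""
    data = json.loads(body)
    titles = data["Titles"]
    rows = []
    for proc in data["Processes"]:
        head, tail = proc[: len(titles) - 1], proc[len(titles) - 1 :]
        rows.append(dict(zip(titles, head + [" ".join(tail)])))
    return rows

body = '{"Titles":["USER","PID","COMMAND"],"Processes":[["root","20271","sleep","10"]]}'
print(top_to_rows(body))  # [{'USER': 'root', 'PID': '20271', 'COMMAND': 'sleep 10'}]
```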
-Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "ContainerIDFile": "", - "Privileged": false, - "PortBindings": {"22/tcp": [{"HostIp": "", "HostPort": ""}]}, - "Links": [], - "PublishAllPorts": false - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – 
number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -   - -- **signal** – Signal to send to the container (integer). When not - set, SIGKILL is assumed and the call will wait for the - container to exit. - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains the information on which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame, encoded in the last 4 bytes (uint32). - - The header is encoded in the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. 
- Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/(format)` - -List images `format` could be json or viz (json default) - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Repository":"base", - "Tag":"ubuntu-12.10", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - }, - { - "Repository":"base", - "Tag":"ubuntu-quantal", - "Id":"b750fe79269d", - "Created":1364102658, - "Size":24653, - "VirtualSize":180116135 - } - ] - -**Example request**: - - GET /images/viz HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - - digraph docker { - "d82cbacda43a" -> "074be284591f" - "1496068ca813" -> "08306dc45919" - "08306dc45919" -> "0e7893146ac2" - "b750fe79269d" -> "1496068ca813" - base -> "27cf78414709" [style=invis] - "f71189fff3de" -> "9a33b36209ed" - "27cf78414709" -> "b750fe79269d" - "0e7893146ac2" -> "d6434d954665" - "d6434d954665" -> "d82cbacda43a" - base -> "e9aa60c60128" [style=invis] - "074be284591f" -> "f71189fff3de" - "b750fe79269d" [label="b750fe79269d\nbase",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "e9aa60c60128" [label="e9aa60c60128\nbase2",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - "9a33b36209ed" 
[label="9a33b36209ed\ntest",shape=box,fillcolor="paleturquoise",style="filled,rounded"]; - base [style=invisible] - } - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by defaul - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create an image - -`POST /images/create` - -Create an image, either by pull it from the registry or by importing i - -**Example request**: - - POST /images/create?fromImage=base HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... 
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/base/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "ExposedPorts":{}, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"base", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/base/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} {"status":"Pushing", "progress":"1/? 
(n/a)"} - {"error":"Invalid..."} ... - - > The `X-Registry-Auth` header can be used to - > include a base64-encoded AuthConfig object. - -Status Codes: - -- **200** – no error :statuscode 404: no such image :statuscode - 500: server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com) - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Name":"cespare/sshd", - "Description":"" - }, - { - "Name":"johnfuller/sshd", - "Description":"" - }, - { - "Name":"dhrp/mongodb-sshd", - "Description":"" - } - ] - - :query term: term to search - :statuscode 200: no error - :statuscode 500: server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - - {{ STREAM }} - - 
The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - The archive must include a file called `Dockerfile` at its root. It - may include any number of other files, which will be accessible in - the build context (See the *ADD* build command). - - The Content-type header should be set to "application/tar". - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error 
-- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Cmd": ["cat", "/world"], - "ExposedPorts":{"22/tcp":{}} - } - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "base:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "base:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run`: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the Remote API, add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.7.md~ b/reference/api/docker_remote_api_v1.7.md~ deleted file mode 100644 index 6cdd60374f..0000000000 --- a/reference/api/docker_remote_api_v1.7.md~ +++ /dev/null @@ -1,1242 +0,0 @@ -page_title: Remote API v1.7 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.7 - -# 1. Brief introduction - - - The Remote API has replaced rcli - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` - and `stderr` - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "base:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "base:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "base:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "base:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"base", - "Volumes":{ - "/tmp": {} - }, - "VolumesFrom":"", - "WorkingDir":"", - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806" - "Warnings":[] - } - -Json Parameters: - -- **config** – the container's configuration - -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "base", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": 
"2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {} - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 
200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "Privileged":false, - "PublishAllPorts":false - } - - Binds need to reference Volumes that were defined during container - creation. - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -- **hostConfig** – the container's host configuration (optional) - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server 
error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.7/#create-a-container), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains the information on which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame, encoded in the last 4 bytes (uint32). - - The header is encoded in the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. 
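To make the framing concrete, the header and payload layout described above can be demultiplexed with a few lines of code. This is a minimal sketch in Python, operating on synthetic frame bytes rather than a live daemon connection:

```python
import struct

STDIN, STDOUT, STDERR = 0, 1, 2

def demux(raw):
    """Split a multiplexed attach stream into (stdout, stderr) byte strings."""
    out, err = b"", b""
    i = 0
    while i + 8 <= len(raw):
        stream_type = raw[i]                                # first byte: stream type
        (size,) = struct.unpack(">I", raw[i + 4:i + 8])     # last 4 bytes: big-endian uint32
        frame = raw[i + 8:i + 8 + size]                     # payload follows the header
        if stream_type == STDERR:
            err += frame
        else:  # stdin frames are written on stdout, per the spec above
            out += frame
        i += 8 + size
    return out, err

# Two synthetic frames: "hi\n" on stdout, "oops\n" on stderr
raw = (b"\x01\x00\x00\x00\x00\x00\x00\x03hi\n"
       + b"\x02\x00\x00\x00\x00\x00\x00\x05oops\n")
print(demux(raw))
```

A real client would read the 8-byte header from the socket, then read exactly `size` more bytes, and loop; the buffer-based version here only illustrates the frame layout.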
- - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. 
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": 
"27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -### Create an image - -`POST /images/create` - -Create an image, either by pull it from the registry or by importing i - -**Example request**: - - POST /images/create?fromImage=base HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pulling..."} - {"status":"Pulling", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)"} - {"error":"Invalid..."} - ... 
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/base/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"base", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/base/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Pushing..."} - {"status":"Pushing", "progress":"1/? 
(n/a)"} - {"error":"Invalid..."} - ... - - Request Headers: - -   - -- **X-Registry-Auth** – include a base64-encoded AuthConfig - object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... 
- ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {{ STREAM }} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/builder/#dockerbuilder)). - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image - - Request Headers: - -   - -- **Content-type** – should be set to - `"application/tar"`. 
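The `{{ STREAM }}` build response, like the responses of `/images/create` and `/images/(name)/push`, is a stream of newline-delimited JSON objects in which an `error` object signals failure. A minimal client-side consumer might look like the following sketch; the function name and sample payload are illustrative, not part of the API:

```python
import json

def parse_progress_stream(raw: bytes):
    """Parse a newline-delimited JSON progress stream, as returned by
    /build, /images/create and /images/(name)/push.

    Raises RuntimeError as soon as the daemon reports an "error" object.
    """
    events = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if "error" in event:
            raise RuntimeError(event["error"])
        events.append(event)
    return events

# Sample stream modeled on the example responses in this document.
sample = (b'{"status": "Pulling..."}\n'
          b'{"status": "Pulling", "progress": "1/? (n/a)"}\n')
for event in parse_progress_stream(sample):
    print(event["status"])
```

In a real client the same loop would run incrementally over the chunked HTTP response rather than over a complete byte string.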
 - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") -- **run** – config automatically applied when the image is run. 
 - (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "base:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "base:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. - -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run`: - -- Create the container - -- If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - -- Start the container - -- If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - -- If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the Remote API, add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.8.md~ b/reference/api/docker_remote_api_v1.8.md~ deleted file mode 100644 index 409e63a163..0000000000 --- a/reference/api/docker_remote_api_v1.8.md~ +++ /dev/null @@ -1,1318 +0,0 @@ -page_title: Remote API v1.8 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.8 - -# 1. Brief introduction - - - The Remote API has replaced rcli - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` - and `stderr` - -# 2. 
Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers - -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "base:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "base:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "base:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "base:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "CpuShares":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"base", - "Volumes":{ - "/tmp": {} - }, - "VolumesFrom":"", - "WorkingDir":"", - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -   - -- **Hostname** – Container host name -- **User** – Username or UID -- **Memory** – Memory Limit in bytes -- **CpuShares** – CPU shares (relative weight) -- **AttachStdin** – 1/True/true or 0/False/false, attach to - standard input. Default false -- **AttachStdout** – 1/True/true or 0/False/false, attach to - standard output. Default false -- **AttachStderr** – 1/True/true or 0/False/false, attach to - standard error. Default false -- **Tty** – 1/True/true or 0/False/false, allocate a pseudo-tty. - Default false -- **OpenStdin** – 1/True/true or 0/False/false, keep stdin open - even if not attached. Default false - -Query Parameters: - -   - -- **name** – Assign the specified name to the container. Must - match `/?[a-zA-Z0-9_-]+`. 
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "base", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": null, - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example 
response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **Binds** – Create a bind mount to a directory or file 
with - [host-path]:[container-path]:[rw|ro]. If a directory - "container-path" is missing, then docker creates a new volume. -- **LxcConf** – Map of custom lxc options -- **PortBindings** – Expose ports from the container, optionally - publishing them via the HostPort flag -- **PublishAllPorts** – 1/True/true or 0/False/false, publish all - exposed ports to the host interfaces. Default false -- **Privileged** – 1/True/true or 0/False/false, give extended - privileges to this container. Default false - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 OK - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream 
 - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create` - ](/reference/api/docker_remote_api_v1.9/#create-a-container "POST /containers/create"), - the stream is the raw data from the process PTY and client's stdin. - When the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header identifies the stream the payload belongs to - (stdout or stderr). It also contains the size of the - associated frame, encoded in the last 4 bytes (uint32). - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream. - - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. 
Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. 
Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": "27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=base HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. 
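A client can build the `X-Registry-Auth` value by serializing the AuthConfig fields (the same fields accepted by `POST /auth`) to JSON and base64-encoding the result. The sketch below assumes standard (non-URL-safe) base64, and the helper name is illustrative:

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Serialize an AuthConfig object and base64-encode it for use as
    the X-Registry-Auth request header value."""
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    payload = json.dumps(auth_config).encode("utf-8")
    return base64.b64encode(payload).decode("ascii")

header_value = registry_auth_header(
    "hannibal", "xxxx", "hannibal@a-team.com",
    "https://index.docker.io/v1/")
print(header_value)
```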
- -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}} - {"error":"Invalid..."} - ... - -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/base/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"base", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - 
-Return the history of the image `name` - -**Example request**: - - GET /images/base/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? (n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - Request Headers: - -   - -- **X-Registry-Auth** – include a base64-encoded AuthConfig - object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** – The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – 
server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request. - -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile via stdin - -`POST /build` - -Build an image from Dockerfile via stdin - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#dockerbuilder)). 
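A build context satisfying these requirements can be assembled in memory. The sketch below produces a gzip-compressed tar archive with a `Dockerfile` at its root, suitable as a `POST /build` body; the helper name and sample Dockerfile are illustrative:

```python
import io
import tarfile

def make_build_context(dockerfile: str) -> bytes:
    """Build an in-memory gzip-compressed tar archive with a Dockerfile
    at its root, suitable as the body of POST /build."""
    data = dockerfile.encode("utf-8")
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        info = tarfile.TarInfo(name="Dockerfile")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

context = make_build_context('FROM base\nCMD ["date"]\n')
print(len(context), "bytes")
```

Any other files added to the archive alongside the `Dockerfile` become available in the build context.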
 - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image - - Request Headers: - -   - -- **Content-type** – should be set to - `"application/tar"`. -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST 
/commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 OK - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") -- **run** – config automatically applied when the image is run. - (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, -or via polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "base:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "base:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository -specified by `name`. -See the [image tarball format](#image-tarball-format) for more details. 
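Assuming the layout described in the image tarball format section (a `repositories` file at the root mapping repository and tag names to layer IDs), a client could recover the tag mapping from a downloaded tarball like this; the helper name is illustrative:

```python
import io
import json
import tarfile

def repo_tags(tarball: bytes) -> dict:
    """Return the {repository: {tag: layer_id}} mapping stored in the
    `repositories` file of an image tarball, or {} if it is absent."""
    with tarfile.open(fileobj=io.BytesIO(tarball)) as tar:
        try:
            member = tar.extractfile("repositories")
        except KeyError:
            return {}
        return json.load(member)

# Exercise it with a minimal tarball containing only a `repositories` file.
payload = json.dumps(
    {"hello-world": {"latest": "565a9d68a73f"}}).encode("utf-8")
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="repositories")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
print(repo_tags(buf.getvalue()))
```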
- -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run`: - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, /attach, uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross origin requests to the remote api add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/docker_remote_api_v1.9.md~ b/reference/api/docker_remote_api_v1.9.md~ deleted file mode 100644 index 7ea3fc9ab1..0000000000 --- a/reference/api/docker_remote_api_v1.9.md~ +++ /dev/null @@ -1,1351 +0,0 @@ -page_title: Remote API v1.9 -page_description: API Documentation for Docker -page_keywords: API, Docker, rcli, REST, documentation - -# Docker Remote API v1.9 - -# 1. Brief introduction - - - The Remote API has replaced rcli - - The daemon listens on `unix:///var/run/docker.sock` but you can bind - Docker to another host/port or a Unix socket. - - The API tends to be REST, but for some complex commands, like `attach` - or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` - and `stderr` - -# 2. Endpoints - -## 2.1 Containers - -### List containers - -`GET /containers/json` - -List containers. 
- -**Example request**: - - GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "8dfafdbc3a40", - "Image": "base:latest", - "Command": "echo 1", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "9cd87474be90", - "Image": "base:latest", - "Command": "echo 222222", - "Created": 1367854155, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - }, - { - "Id": "3176a2479c92", - "Image": "base:latest", - "Command": "echo 3333333333333333", - "Created": 1367854154, - "Status": "Exit 0", - "Ports":[], - "SizeRw":12288, - "SizeRootFs":0 - }, - { - "Id": "4cb07b47f9fb", - "Image": "base:latest", - "Command": "echo 444444444444444444444444444444444", - "Created": 1367854152, - "Status": "Exit 0", - "Ports": [], - "SizeRw": 12288, - "SizeRootFs": 0 - } - ] - -Query Parameters: - -   - -- **all** – 1/True/true or 0/False/false, Show all containers. - Only running containers are shown by default (i.e., this defaults to false) -- **limit** – Show `limit` last created containers, include non-running ones. -- **since** – Show only containers created since Id, include non-running ones. -- **before** – Show only containers created before Id, include non-running ones. 
-- **size** – 1/True/true or 0/False/false, Show the containers' sizes - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **500** – server error - -### Create a container - -`POST /containers/create` - -Create a container - -**Example request**: - - POST /containers/create HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "CpuShares":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Dns":null, - "Image":"base", - "Volumes":{ - "/tmp": {} - }, - "VolumesFrom":"", - "WorkingDir":"", - "ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/json - - { - "Id":"e90e34656806", - "Warnings":[] - } - -Json Parameters: - -   - -- **Hostname** – Container host name -- **User** – Username or UID -- **Memory** – Memory Limit in bytes -- **CpuShares** – CPU shares (relative weight) -- **AttachStdin** – 1/True/true or 0/False/false, attach to - standard input. Default false -- **AttachStdout** – 1/True/true or 0/False/false, attach to - standard output. Default false -- **AttachStderr** – 1/True/true or 0/False/false, attach to - standard error. Default false -- **Tty** – 1/True/true or 0/False/false, allocate a pseudo-tty. - Default false -- **OpenStdin** – 1/True/true or 0/False/false, keep stdin open - even if not attached. Default false - -Query Parameters: - -   - -- **name** – Assign the specified name to the container. Must
- -Status Codes: - -- **201** – no error -- **404** – no such container -- **406** – impossible to attach (container not running) -- **500** – server error - -### Inspect a container - -`GET /containers/(id)/json` - -Return low-level information on the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2", - "Created": "2013-05-07T14:51:42.041847+02:00", - "Path": "date", - "Args": [], - "Config": { - "Hostname": "4fa6e0f0c678", - "User": "", - "Memory": 0, - "MemorySwap": 0, - "AttachStdin": false, - "AttachStdout": true, - "AttachStderr": true, - "PortSpecs": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "date" - ], - "Dns": null, - "Image": "base", - "Volumes": {}, - "VolumesFrom": "", - "WorkingDir": "" - }, - "State": { - "Running": false, - "Pid": 0, - "ExitCode": 0, - "StartedAt": "2013-05-07T14:51:42.087658+02:01360", - "Ghost": false - }, - "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "NetworkSettings": { - "IpAddress": "", - "IpPrefixLen": 0, - "Gateway": "", - "Bridge": "", - "PortMapping": null - }, - "SysInitPath": "/home/kitty/go/src/github.com/docker/docker/bin/docker", - "ResolvConfPath": "/etc/resolv.conf", - "Volumes": {}, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "LxcConf": [], - "Privileged": false, - "PortBindings": { - "80/tcp": [ - { - "HostIp": "0.0.0.0", - "HostPort": "49153" - } - ] - }, - "Links": null, - "PublishAllPorts": false - } - } - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### List processes running inside a container - -`GET /containers/(id)/top` - -List processes running inside the container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/top HTTP/1.1 - -**Example 
response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Titles": [ - "USER", - "PID", - "%CPU", - "%MEM", - "VSZ", - "RSS", - "TTY", - "STAT", - "START", - "TIME", - "COMMAND" - ], - "Processes": [ - ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"], - ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"] - ] - } - -Query Parameters: - -- **ps_args** – ps arguments to use (e.g., aux) - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Inspect changes on a container's filesystem - -`GET /containers/(id)/changes` - -Inspect changes on container `id`'s filesystem - -**Example request**: - - GET /containers/4fa6e0f0c678/changes HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Path": "/dev", - "Kind": 0 - }, - { - "Path": "/dev/kmsg", - "Kind": 1 - }, - { - "Path": "/test", - "Kind": 1 - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Export a container - -`GET /containers/(id)/export` - -Export the contents of container `id` - -**Example request**: - - GET /containers/4fa6e0f0c678/export HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Start a container - -`POST /containers/(id)/start` - -Start the container `id` - -**Example request**: - - POST /containers/(id)/start HTTP/1.1 - Content-Type: application/json - - { - "Binds":["/tmp:/tmp"], - "LxcConf":[{"Key":"lxc.utsname","Value":"docker"}], - "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] }, - "PublishAllPorts":false, - "Privileged":false - } - -**Example response**: - - HTTP/1.1 204 No Content - Content-Type: text/plain - -Json Parameters: - -   - -- **Binds** – Create a bind mount to a directory or file 
with - [host-path]:[container-path]:[rw|ro]. If a directory - "container-path" is missing, then docker creates a new volume. -- **LxcConf** – Map of custom lxc options -- **PortBindings** – Expose ports from the container, optionally - publishing them via the HostPort flag -- **PublishAllPorts** – 1/True/true or 0/False/false, publish all - exposed ports to the host interfaces. Default false -- **Privileged** – 1/True/true or 0/False/false, give extended - privileges to this container. Default false - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Stop a container - -`POST /containers/(id)/stop` - -Stop the container `id` - -**Example request**: - - POST /containers/e90e34656806/stop?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Restart a container - -`POST /containers/(id)/restart` - -Restart the container `id` - -**Example request**: - - POST /containers/e90e34656806/restart?t=5 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **t** – number of seconds to wait before killing the container - -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Kill a container - -`POST /containers/(id)/kill` - -Kill the container `id` - -**Example request**: - - POST /containers/e90e34656806/kill HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **signal** - Signal to send to the container: integer or string like "SIGINT". - When not set, SIGKILL is assumed and the call will wait for the container to exit.
- -Status Codes: - -- **204** – no error -- **404** – no such container -- **500** – server error - -### Attach to a container - -`POST /containers/(id)/attach` - -Attach to the container `id` - -**Example request**: - - POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/vnd.docker.raw-stream - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default - false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr. Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - - **Stream details**: - - When the TTY setting is enabled in - [`POST /containers/create`](#create-a-container), the - stream is the raw data from the process PTY and client's stdin. When - the TTY is disabled, then the stream is multiplexed to separate - stdout and stderr. - - The format is a **Header** and a **Payload** (frame). - - **HEADER** - - The header contains information about which stream the payload - belongs to (stdout or stderr). It also contains the size of the - associated frame encoded on the last 4 bytes (uint32). - - It is encoded on the first 8 bytes like this: - - header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} - - `STREAM_TYPE` can be: - -- 0: stdin (will be written on stdout) -- 1: stdout -- 2: stderr - - `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of - the uint32 size encoded as big endian. - - **PAYLOAD** - - The payload is the raw stream.
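As a sketch only, a client could decode this framing as follows (the `read_frames` helper and the sample buffer are illustrative, not part of the API):

```python
import struct
from io import BytesIO

def read_frames(stream):
    """Yield (stream_type, payload) pairs from a multiplexed attach stream."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return  # end of stream
        stream_type = header[0]  # 0=stdin, 1=stdout, 2=stderr
        # The last 4 header bytes are the payload size, big-endian uint32.
        (size,) = struct.unpack(">I", header[4:8])
        yield stream_type, stream.read(size)

# Two sample frames: "hello\n" on stdout (type 1), "oops\n" on stderr (type 2).
buf = BytesIO(
    b"\x01\x00\x00\x00" + struct.pack(">I", 6) + b"hello\n"
    + b"\x02\x00\x00\x00" + struct.pack(">I", 5) + b"oops\n"
)
frames = list(read_frames(buf))
print(frames)  # [(1, b'hello\n'), (2, b'oops\n')]
```

The same loop works against the hijacked HTTP connection once the response headers have been consumed.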
- - **IMPLEMENTATION** - - The simplest way to implement the Attach protocol is the following: - - 1. Read 8 bytes - 2. Choose stdout or stderr depending on the first byte - 3. Extract the frame size from the last 4 bytes - 4. Read the extracted size and output it on the correct output - 5. Goto 1) - -### Attach to a container (websocket) - -`GET /containers/(id)/attach/ws` - -Attach to the container `id` via websocket - -Implements websocket protocol handshake according to [RFC 6455](http://tools.ietf.org/html/rfc6455) - -**Example request** - - GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 - -**Example response** - - {{ STREAM }} - -Query Parameters: - -- **logs** – 1/True/true or 0/False/false, return logs. Default false -- **stream** – 1/True/true or 0/False/false, return stream. - Default false -- **stdin** – 1/True/true or 0/False/false, if stream=true, attach - to stdin. Default false -- **stdout** – 1/True/true or 0/False/false, if logs=true, return - stdout log, if stream=true, attach to stdout. Default false -- **stderr** – 1/True/true or 0/False/false, if logs=true, return - stderr log, if stream=true, attach to stderr.
Default false - -Status Codes: - -- **200** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Wait a container - -`POST /containers/(id)/wait` - -Block until container `id` stops, then returns the exit code - -**Example request**: - - POST /containers/16253994b7c4/wait HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"StatusCode": 0} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -### Remove a container - -`DELETE /containers/(id)` - -Remove the container `id` from the filesystem - -**Example request**: - - DELETE /containers/16253994b7c4?v=1 HTTP/1.1 - -**Example response**: - - HTTP/1.1 204 No Content - -Query Parameters: - -- **v** – 1/True/true or 0/False/false, Remove the volumes - associated to the container. Default false - -Status Codes: - -- **204** – no error -- **400** – bad parameter -- **404** – no such container -- **500** – server error - -### Copy files or folders from a container - -`POST /containers/(id)/copy` - -Copy files or folders of container `id` - -**Example request**: - - POST /containers/4fa6e0f0c678/copy HTTP/1.1 - Content-Type: application/json - - { - "Resource": "test.txt" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/octet-stream - - {{ TAR STREAM }} - -Status Codes: - -- **200** – no error -- **404** – no such container -- **500** – server error - -## 2.2 Images - -### List Images - -`GET /images/json` - -**Example request**: - - GET /images/json?all=0 HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "RepoTags": [ - "ubuntu:12.04", - "ubuntu:precise", - "ubuntu:latest" - ], - "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c", - "Created": 1365714795, - "Size": 131506275, - "VirtualSize": 131506275 - }, - { - "RepoTags": [ - "ubuntu:12.10", - "ubuntu:quantal" - ], - "ParentId": 
"27cf784147099545", - "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "Created": 1364102658, - "Size": 24653, - "VirtualSize": 180116135 - } - ] - -### Create an image - -`POST /images/create` - -Create an image, either by pulling it from the registry or by importing it - -**Example request**: - - POST /images/create?fromImage=base HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pulling..."} - {"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}} - {"error": "Invalid..."} - ... - - When using this endpoint to pull an image from the registry, the - `X-Registry-Auth` header can be used to include - a base64-encoded AuthConfig object. - -Query Parameters: - -- **fromImage** – name of the image to pull -- **fromSrc** – source to import, - means stdin -- **repo** – repository -- **tag** – tag -- **registry** – the registry to pull from - -Request Headers: - -- **X-Registry-Auth** – base64-encoded AuthConfig object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Insert a file in an image - -`POST /images/(name)/insert` - -Insert a file from `url` in the image `name` at `path` - -**Example request**: - - POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status":"Inserting..."} - {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}} - {"error":"Invalid..."} - ...
- -Query Parameters: - -- **url** – The url from where the file is taken -- **path** – The path where the file is stored - -Status Codes: - -- **200** – no error -- **500** – server error - -### Inspect an image - -`GET /images/(name)/json` - -Return low-level information on the image `name` - -**Example request**: - - GET /images/base/json HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc", - "parent":"27cf784147099545", - "created":"2013-03-23T22:24:18.818426-07:00", - "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0", - "container_config": - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":false, - "AttachStderr":false, - "PortSpecs":null, - "Tty":true, - "OpenStdin":true, - "StdinOnce":false, - "Env":null, - "Cmd": ["/bin/bash"], - "Dns":null, - "Image":"base", - "Volumes":null, - "VolumesFrom":"", - "WorkingDir":"" - }, - "Size": 6824592 - } - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Get the history of an image - -`GET /images/(name)/history` - -Return the history of the image `name` - -**Example request**: - - GET /images/base/history HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "Id": "b750fe79269d", - "Created": 1364102658, - "CreatedBy": "/bin/bash" - }, - { - "Id": "27cf78414709", - "Created": 1364068391, - "CreatedBy": "" - } - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Push an image on the registry - -`POST /images/(name)/push` - -Push the image `name` on the registry - -**Example request**: - - POST /images/test/push HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "Pushing..."} - {"status": "Pushing", "progress": "1/? 
(n/a)", "progressDetail": {"current": 1}} - {"error": "Invalid..."} - ... - - Request Headers: - -   - -- **X-Registry-Auth** – include a base64-encoded AuthConfig - object. - -Status Codes: - -- **200** – no error -- **404** – no such image -- **500** – server error - -### Tag an image into a repository - -`POST /images/(name)/tag` - -Tag the image `name` into a repository - -**Example request**: - - POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 - -**Example response**: - - HTTP/1.1 201 Created - -Query Parameters: - -- **repo** – The repository to tag in -- **force** – 1/True/true or 0/False/false, default false -- **tag** - The new tag name - -Status Codes: - -- **201** – no error -- **400** – bad parameter -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Remove an image - -`DELETE /images/(name)` - -Remove the image `name` from the filesystem - -**Example request**: - - DELETE /images/test HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-type: application/json - - [ - {"Untagged": "3e2f21a89f"}, - {"Deleted": "3e2f21a89f"}, - {"Deleted": "53b4f83ac9"} - ] - -Status Codes: - -- **200** – no error -- **404** – no such image -- **409** – conflict -- **500** – server error - -### Search images - -`GET /images/search` - -Search for an image on [Docker Hub](https://hub.docker.com). - -> **Note**: -> The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon's request.
- -**Example request**: - - GET /images/search?term=sshd HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - [ - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "wma55/u1210sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "jdswinbank/sshd", - "star_count": 0 - }, - { - "description": "", - "is_official": false, - "is_trusted": false, - "name": "vgauthier/sshd", - "star_count": 0 - } - ... - ] - -Query Parameters: - -- **term** – term to search - -Status Codes: - -- **200** – no error -- **500** – server error - -## 2.3 Misc - -### Build an image from Dockerfile - -`POST /build` - -Build an image from Dockerfile using a POST body. - -**Example request**: - - POST /build HTTP/1.1 - - {{ TAR STREAM }} - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"stream": "Step 1..."} - {"stream": "..."} - {"error": "Error...", "errorDetail": {"code": 123, "message": "Error..."}} - - The stream must be a tar archive compressed with one of the - following algorithms: identity (no compression), gzip, bzip2, xz. - - The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, - which will be accessible in the build context (See the [*ADD build - command*](/reference/builder/#add)). - -Query Parameters: - -- **t** – repository name (and optionally a tag) to be applied to - the resulting image in case of success -- **remote** – build source URI (git or HTTPS/HTTP) -- **q** – suppress verbose build output -- **nocache** – do not use the cache when building the image -- **rm** – Remove intermediate containers after a successful build - - Request Headers: - -- **Content-type** – should be set to `"application/tar"`. 
-- **X-Registry-Config** – base64-encoded ConfigFile object - -Status Codes: - -- **200** – no error -- **500** – server error - -### Check auth configuration - -`POST /auth` - -Get the default username and email - -**Example request**: - - POST /auth HTTP/1.1 - Content-Type: application/json - - { - "username": "hannibal", - "password": "xxxx", - "email": "hannibal@a-team.com", - "serveraddress": "https://index.docker.io/v1/" - } - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: text/plain - -Status Codes: - -- **200** – no error -- **204** – no error -- **500** – server error - -### Display system-wide information - -`GET /info` - -Display system-wide information - -**Example request**: - - GET /info HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Containers":11, - "Images":16, - "Debug":false, - "NFd": 11, - "NGoroutines":21, - "MemoryLimit":true, - "SwapLimit":false, - "IPv4Forwarding":true - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Show the docker version information - -`GET /version` - -Show the docker version information - -**Example request**: - - GET /version HTTP/1.1 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - { - "Version":"0.2.2", - "GitCommit":"5a2a5cc+CHANGES", - "GoVersion":"go1.0.3" - } - -Status Codes: - -- **200** – no error -- **500** – server error - -### Create a new image from a container's changes - -`POST /commit` - -Create a new image from a container's changes - -**Example request**: - - POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 - Content-Type: application/json - - { - "Hostname":"", - "User":"", - "Memory":0, - "MemorySwap":0, - "AttachStdin":false, - "AttachStdout":true, - "AttachStderr":true, - "PortSpecs":null, - "Tty":false, - "OpenStdin":false, - "StdinOnce":false, - "Env":null, - "Cmd":[ - "date" - ], - "Volumes":{ - "/tmp": {} - }, - "WorkingDir":"", - "DisableNetwork": false, - 
"ExposedPorts":{ - "22/tcp": {} - } - } - -**Example response**: - - HTTP/1.1 201 Created - Content-Type: application/vnd.docker.raw-stream - - {"Id": "596069db4bf5"} - -Json Parameters: - -- **config** - the container's configuration - -Query Parameters: - -- **container** – source container -- **repo** – repository -- **tag** – tag -- **m** – commit message -- **author** – author (e.g., "John Hannibal Smith - <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - -Status Codes: - -- **201** – no error -- **404** – no such container -- **500** – server error - -### Monitor Docker's events - -`GET /events` - -Get events from docker, either in real time via streaming, or via -polling (using since). - -Docker containers will report the following events: - - create, destroy, die, export, kill, pause, restart, start, stop, unpause - -and Docker images will report: - - untag, delete - -**Example request**: - - GET /events?since=1374067924 - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/json - - {"status": "create", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "start", "id": "dfdf82bd3881","from": "base:latest", "time":1374067924} - {"status": "stop", "id": "dfdf82bd3881","from": "base:latest", "time":1374067966} - {"status": "destroy", "id": "dfdf82bd3881","from": "base:latest", "time":1374067970} - -Query Parameters: - -- **since** – timestamp used for polling - -Status Codes: - -- **200** – no error -- **500** – server error - -### Get a tarball containing all images and tags in a repository - -`GET /images/(name)/get` - -Get a tarball containing all images and metadata for the repository specified by `name`. - -See the [image tarball format](#image-tarball-format) for more details. 
- -**Example request** - - GET /images/ubuntu/get - -**Example response**: - - HTTP/1.1 200 OK - Content-Type: application/x-tar - - Binary data stream - -Status Codes: - -- **200** – no error -- **500** – server error - -### Load a tarball with a set of images and tags into docker - -`POST /images/load` - -Load a set of images and tags into the docker repository. - -See the [image tarball format](#image-tarball-format) for more details. - -**Example request** - - POST /images/load - - Tarball in body - -**Example response**: - - HTTP/1.1 200 OK - -Status Codes: - -- **200** – no error -- **500** – server error - -### Image tarball format - -An image tarball contains one directory per image layer (named using its long ID), -each containing three files: - -1. `VERSION`: currently `1.0` - the file format version -2. `json`: detailed layer information, similar to `docker inspect layer_id` -3. `layer.tar`: A tarfile containing the filesystem changes in this layer - -The `layer.tar` file will contain `aufs` style `.wh..wh.aufs` files and directories -for storing attribute changes and deletions. - -If the tarball defines a repository, there will also be a `repositories` file at -the root that contains a list of repository and tag names mapped to layer IDs. - -``` -{"hello-world": - {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"} -} -``` - -# 3. 
Going further - -## 3.1 Inside `docker run` - -Here are the steps of `docker run`: - - - Create the container - - - If the status code is 404, it means the image doesn't exist: - - Try to pull it - - Then retry to create the container - - - Start the container - - - If you are not in detached mode: - - Attach to the container, using logs=1 (to have stdout and - stderr from the container's start) and stream=1 - - - If in detached mode or only stdin is attached: - - Display the container's id - -## 3.2 Hijacking - -In this version of the API, `/attach` uses hijacking to transport stdin, -stdout and stderr on the same socket. This might change in the future. - -## 3.3 CORS Requests - -To enable cross-origin requests to the remote API, add the flag -"--api-enable-cors" when running docker in daemon mode. - - $ docker -d -H="192.168.1.9:2375" --api-enable-cors diff --git a/reference/api/hub_registry_spec.md~ b/reference/api/hub_registry_spec.md~ deleted file mode 100644 index 26d4ffca30..0000000000 --- a/reference/api/hub_registry_spec.md~ +++ /dev/null @@ -1,755 +0,0 @@ -page_title: Registry Documentation -page_description: Documentation for docker Registry and Registry API -page_keywords: docker, registry, api, hub - -# The Docker Hub and the Registry spec - -## The three roles - -There are three major components playing a role in the Docker ecosystem. - -### Docker Hub - -The Docker Hub is responsible for centralizing information about: - - - User accounts - - Checksums of the images - - Public namespaces - -The Docker Hub has different components: - - - Web UI - - Meta-data store (comments, stars, list public repositories) - - Authentication service - - Tokenization - -The Docker Hub is authoritative for that information. - -There is only one instance of the Docker Hub, run and -managed by Docker Inc.
- -### Registry - -The registry has the following characteristics: - - - It stores the images and the graph for a set of repositories - - It does not have user accounts data - - It has no notion of user accounts or authorization - - It delegates authentication and authorization to the Docker Hub Auth - service using tokens - - It supports different storage backends (S3, cloud files, local FS) - - It doesn't have a local database - - [Source Code](https://github.com/docker/docker-registry) - -We expect that there will be multiple registries out there. To help you -grasp the context, here are some examples of registries: - - - **sponsor registry**: such a registry is provided by a third-party - hosting infrastructure as a convenience for their customers and the - Docker community as a whole. Its costs are supported by the third - party, but the management and operation of the registry are - supported by Docker, Inc. It features read/write access, and delegates - authentication and authorization to the Docker Hub. - - **mirror registry**: such a registry is provided by a third-party - hosting infrastructure but is targeted at their customers only. Some - mechanism (unspecified to date) ensures that public images are - pulled from a sponsor registry to the mirror registry, to make sure - that the customers of the third-party provider can `docker pull` - those images locally. - - **vendor registry**: such a registry is provided by a software - vendor who wants to distribute docker images. It would be operated - and managed by the vendor. Only users authorized by the vendor would - be able to get write access. Some images would be public (accessible - for anyone), others private (accessible only for authorized users). - Authentication and authorization would be delegated to the Docker Hub. 
- The goal of vendor registries is to let someone do `docker pull - basho/riak1.3` and automatically pull from the vendor registry - (instead of a sponsor registry); i.e., vendors get all the convenience of a - sponsor registry, while retaining control over asset distribution. - - **private registry**: such a registry is located behind a firewall, - or protected by an additional security layer (HTTP authorization, - SSL client-side certificates, IP address authorization...). The - registry is operated by a private entity, outside of Docker's - control. It can optionally delegate additional authorization to the - Docker Hub, but it is not mandatory. - -> **Note:** The latter implies that while HTTP is the protocol -> of choice for a registry, multiple schemes are possible (and -> in some cases, trivial): -> -> - HTTP with GET (and PUT for read-write registries); -> - local mount point; -> - remote docker addressed through SSH. - -The latter would only require two new commands in Docker, e.g., -`registryget` and `registryput`, -wrapping access to the local filesystem (and optionally doing -consistency checks). Authentication and authorization are then delegated -to SSH (e.g., with public keys). - -### Docker - -On top of being a runtime for LXC, Docker is the Registry client. It -supports: - - - Push / Pull on the registry - - Client authentication on the Docker Hub - -## Workflow - -### Pull - -![](/static_files/docker_pull_chart.png) - -1. Contact the Docker Hub to know where I should download “samalba/busybox” -2. Docker Hub replies: a. `samalba/busybox` is on Registry A b. here are the - checksums for `samalba/busybox` (for all layers) c. token -3. Contact Registry A to receive the layers for `samalba/busybox` (all of - them to the base image). Registry A is authoritative for “samalba/busybox” - but keeps a copy of all inherited layers and serves them all from the same - location. -4. 
registry contacts Docker Hub to verify if token/user is allowed to download images -5. Docker Hub returns true/false letting the registry know if it should proceed or error - out -6. Get the payload for all layers - -It's possible to run: - - $ sudo docker pull https:///repositories/samalba/busybox - -In this case, Docker bypasses the Docker Hub. However, the security is not -guaranteed (in case Registry A is corrupted) because there won't be any -checksum checks. - -Currently the registry redirects to S3 URLs for downloads; going forward, all -downloads need to be streamed through the registry. The Registry will -then abstract the calls to S3 by a top-level class which implements -sub-classes for S3 and local storage. - -The token is only returned when the `X-Docker-Token` -header is sent with the request. - -Basic Auth is required to pull private repos. Basic Auth isn't required -for pulling public repos, but if one is provided, it needs to be valid -and for an active account. - -**API (pulling repository foo/bar):** - -1. (Docker -> Docker Hub) GET /v1/repositories/foo/bar/images: - -**Headers**: - - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - X-Docker-Token: true - -**Action**: - - (looks up foo/bar in the db and gets images and checksums - for that repo (all if no tag is specified; if a tag is given, only - checksums for those tags); see part 4.4.1) - -2. (Docker Hub -> Docker) HTTP 200 OK - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - X-Docker-Endpoints: registry.docker.io [,registry2.docker.io] - -**Body**: - - Jsonified checksums (see part 4.4.1) - -3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - -4. 
(Registry -> Docker Hub) GET /v1/repositories/foo/bar/images - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=read - -**Body**: - - - -**Action**: - - (Lookup token see if they have access to pull.) - - If good: - HTTP 200 OK Docker Hub will invalidate the token - - If bad: - HTTP 401 Unauthorized - -5. (Docker -> Registry) GET /v1/images/928374982374/ancestry - -**Action**: - - (for each image id returned in the registry, fetch /json + /layer) - -> **Note**: -> If someone makes a second request, then we will always give a new token, -> never reuse tokens. - -### Push - -![](/static_files/docker_push_chart.png) - -1. Contact the Docker Hub to allocate the repository name “samalba/busybox” - (authentication required with user credentials) -2. If authentication works and namespace available, “samalba/busybox” - is allocated and a temporary token is returned (namespace is marked - as initialized in Docker Hub) -3. Push the image on the registry (along with the token) -4. Registry A contacts the Docker Hub to verify the token (token must - corresponds to the repository name) -5. Docker Hub validates the token. Registry A starts reading the stream - pushed by docker and store the repository (with its images) -6. docker contacts the Docker Hub to give checksums for upload images - -> **Note:** -> **It's possible not to use the Docker Hub at all!** In this case, a deployed -> version of the Registry is deployed to store and serve images. Those -> images are not authenticated and the security is not guaranteed. - -> **Note:** -> **Docker Hub can be replaced!** For a private Registry deployed, a custom -> Docker Hub can be used to serve and validate token according to different -> policies. - -Docker computes the checksums and submit them to the Docker Hub at the end of -the push. When a repository name does not have checksums on the Docker Hub, -it means that the push is in progress (since checksums are submitted at -the end). 
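Both the pull and push flows exchange the same comma-separated `Authorization: Token ...` header between Docker, the Docker Hub, and the Registry. As an illustrative sketch only, here is how a client might split that header into its `signature`, `repository`, and `access` fields (the field names come from the examples in this document; `parse_token_header` is a hypothetical helper, not part of any Docker API):

```python
def parse_token_header(value):
    """Parse an 'Authorization: Token k=v,k=v,...' value into a dict.

    Assumes the comma-separated key=value shape shown in the examples,
    e.g. 'Token signature=123abc,repository="foo/bar",access=write'.
    """
    scheme, _, params = value.partition(" ")
    if scheme != "Token":
        raise ValueError("expected a Token authorization header")
    fields = {}
    for part in params.split(","):
        key, _, val = part.partition("=")
        # strip optional double quotes around values like "foo/bar"
        fields[key.strip()] = val.strip().strip('"')
    return fields

token = parse_token_header(
    'Token signature=123abc,repository="foo/bar",access=write'
)
# token["repository"] == "foo/bar"; token["access"] == "write"
```

Note this sketch assumes values contain no embedded commas, which holds for the signature/repository/access triple used throughout this document.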
- -**API (pushing repos foo/bar):** - -1. (Docker -> Docker Hub) PUT /v1/repositories/foo/bar/ - -**Headers**: - - Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: - true - -**Action**: - -- in Docker Hub, we allocated a new repository, and set to - initialized - -**Body**: - -(The body contains the list of images that are going to be -pushed, with empty checksums. The checksums will be set at -the end of the push): - - [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}] - -2. (Docker Hub -> Docker) 200 Created - -**Headers**: - - WWW-Authenticate: Token - signature=123abc,repository=”foo/bar”,access=write - X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] - -3. (Docker -> Registry) PUT /v1/images/98765432_parent/json - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - -4. (Registry->Docker Hub) GET /v1/repositories/foo/bar/images - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - -**Action**: - -- Docker Hub: - will invalidate the token. -- Registry: - grants a session (if token is approved) and fetches - the images id - -5. (Docker -> Registry) PUT /v1/images/98765432_parent/json - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - Cookie: (Cookie provided by the Registry) - -6. (Docker -> Registry) PUT /v1/images/98765432/json - -**Headers**: - - Cookie: (Cookie provided by the Registry) - -7. (Docker -> Registry) PUT /v1/images/98765432_parent/layer - -**Headers**: - - Cookie: (Cookie provided by the Registry) - -8. (Docker -> Registry) PUT /v1/images/98765432/layer - -**Headers**: - - X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh - -9. (Docker -> Registry) PUT /v1/repositories/foo/bar/tags/latest - -**Headers**: - - Cookie: (Cookie provided by the Registry) - -**Body**: - - “98765432” - -10. 
(Docker -> Docker Hub) PUT /v1/repositories/foo/bar/images - -**Headers**: - - Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: - registry1.docker.io (no validation on this right now) - -**Body**: - - (The image, id`s, tags and checksums) - [{“id”: - “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, - “checksum”: - “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}] - -**Return**: - - HTTP 204 - -> **Note:** If push fails and they need to start again, what happens in the Docker Hub, -> there will already be a record for the namespace/name, but it will be -> initialized. Should we allow it, or mark as name already used? One edge -> case could be if someone pushes the same thing at the same time with two -> different shells. - -If it's a retry on the Registry, Docker has a cookie (provided by the -registry after token validation). So the Docker Hub won't have to provide a -new token. - -### Delete - -If you need to delete something from the Docker Hub or registry, we need a -nice clean way to do that. Here is the workflow. - -1. Docker contacts the Docker Hub to request a delete of a repository - `samalba/busybox` (authentication required with user credentials) -2. If authentication works and repository is valid, `samalba/busybox` - is marked as deleted and a temporary token is returned -3. Send a delete request to the registry for the repository (along with - the token) -4. Registry A contacts the Docker Hub to verify the token (token must - corresponds to the repository name) -5. Docker Hub validates the token. Registry A deletes the repository and - everything associated to it. -6. docker contacts the Docker Hub to let it know it was removed from the - registry, the Docker Hub removes all records from the database. - -> **Note**: -> The Docker client should present an "Are you sure?" prompt to confirm -> the deletion before starting the process. Once it starts it can't be -> undone. 
- -**API (deleting repository foo/bar):** - -1. (Docker -> Docker Hub) DELETE /v1/repositories/foo/bar/ - -**Headers**: - - Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: - true - -**Action**: - -- in Docker Hub, we make sure it is a valid repository, and set - to deleted (logically) - -**Body**: - - Empty - -2. (Docker Hub -> Docker) 202 Accepted - -**Headers**: - - WWW-Authenticate: Token - signature=123abc,repository=”foo/bar”,access=delete - X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] - # list of endpoints where this repo lives. - -3. (Docker -> Registry) DELETE /v1/repositories/foo/bar/ - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=delete - -4. (Registry->Docker Hub) PUT /v1/repositories/foo/bar/auth - -**Headers**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=delete - -**Action**: - -- Docker Hub: - will invalidate the token. -- Registry: - deletes the repository (if token is approved) - -5. (Registry -> Docker) 200 OK - - 200 If success 403 if forbidden 400 if bad request 404 - if repository isn't found - -6. 
(Docker -> Docker Hub) DELETE /v1/repositories/foo/bar/ - -**Headers**: - - Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: - registry-1.docker.io (no validation on this right now) - -**Body**: - - Empty - -**Return**: - - HTTP 200 - -## How to use the Registry in standalone mode - -The Docker Hub has two main purposes (along with its fancy social features): - - - Resolve short names (to avoid passing absolute URLs all the time): - - username/projectname -> - https://registry.docker.io/users//repositories// - team/projectname -> - https://registry.docker.io/team//repositories// - - - Authenticate a user as a repos owner (for a central referenced - repository) - -### Without a Docker Hub - -Using the Registry without the Docker Hub can be useful to store the images -on a private network without having to rely on an external entity -controlled by Docker Inc. - -In this case, the registry will be launched in a special mode -(-standalone? ne? -no-index?). In this mode, the only thing which changes is -that Registry will never contact the Docker Hub to verify a token. It will be -the Registry owner responsibility to authenticate the user who pushes -(or even pulls) an image using any mechanism (HTTP auth, IP based, -etc...). - -In this scenario, the Registry is responsible for the security in case -of data corruption since the checksums are not delivered by a trusted -entity. - -As hinted previously, a standalone registry can also be implemented by -any HTTP server handling GET/PUT requests (or even only GET requests if -no write access is necessary). - -### With a Docker Hub - -The Docker Hub data needed by the Registry are simple: - - - Serve the checksums - - Provide and authorize a Token - -In the scenario of a Registry running on a private network with the need -of centralizing and authorizing, it's easy to use a custom Docker Hub. - -The only challenge will be to tell Docker to contact (and trust) this -custom Docker Hub. 
Docker will be configurable at some point to use a -specific Docker Hub; it will be the private entity's responsibility (basically -the organization that uses Docker in a private environment) to maintain -the Docker Hub and Docker's configuration among its consumers. - -## The API - -The first version of the API is available here: -[https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md](https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md) - -### Images - -The format returned for the images is not defined here (for layer and -JSON), basically because the Registry stores exactly the same kind of -information as Docker uses to manage them. - -The format of ancestry is a line-separated list of image ids, in age -order, i.e. the image's parent is on the last line, the parent of the -parent on the next-to-last line, etc.; if the image has no parent, the -file is empty. - - GET /v1/images//layer - PUT /v1/images//layer - GET /v1/images//json - PUT /v1/images//json - GET /v1/images//ancestry - PUT /v1/images//ancestry - -### Users - -### Create a user (Docker Hub) - - POST /v1/users: - -**Body**: - - {"email": "sam@docker.com", - "password": "toto42", "username": "foobar"} - -**Validation**: - -- **username**: min 4 characters, max 30 characters, must match the - regular expression [a-z0-9_]. -- **password**: min 5 characters - -**Valid**: - - return HTTP 201 - -Errors: HTTP 400 (we should create error codes for possible errors) - -invalid JSON - missing field - wrong format (username, password, email, -etc.) - forbidden name - name already exists - -> **Note**: -> A user account will be valid only if the email has been validated (a -> validation link is sent to the email address). 
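The username and password rules above are simple enough to sketch directly. The following is an illustrative helper based only on the validation rules stated in this section (`validate_new_user` is a hypothetical name, not part of the Docker Hub API):

```python
import re

# min 4, max 30 characters, limited to [a-z0-9_], per the rules above
USERNAME_RE = re.compile(r"^[a-z0-9_]{4,30}$")

def validate_new_user(username, password):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if not USERNAME_RE.match(username):
        errors.append("wrong format (username)")
    if len(password) < 5:
        errors.append("password too short (min 5 characters)")
    return errors

validate_new_user("foobar", "toto42")   # valid: []
validate_new_user("ab", "x")            # two errors -> HTTP 400
```

A server would map a non-empty error list to the HTTP 400 response described above.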
- -### Update a user (Docker Hub) - - PUT /v1/users/ - -**Body**: - - {"password": "toto"} - -> **Note**: -> We can also update email address, if they do, they will need to reverify -> their new email address. - -### Login (Docker Hub) - -Does nothing else but asking for a user authentication. Can be used to -validate credentials. HTTP Basic Auth for now, maybe change in future. - -GET /v1/users - -**Return**: -- Valid: HTTP 200 -- Invalid login: HTTP 401 -- Account inactive: HTTP 403 Account is not Active - -### Tags (Registry) - -The Registry does not know anything about users. Even though -repositories are under usernames, it's just a namespace for the -registry. Allowing us to implement organizations or different namespaces -per user later, without modifying the Registry's API. - -The following naming restrictions apply: - - - Namespaces must match the same regular expression as usernames (See - 4.2.1.) - - Repository names must match the regular expression [a-zA-Z0-9-_.] - -### Get all tags: - - GET /v1/repositories///tags - - **Return**: HTTP 200 - [ - { - "layer": "9e89cc6f", - "name": "latest" - }, - { - "layer": "b486531f", - "name": "0.1.1", - } - ] - -**4.3.2 Read the content of a tag (resolve the image id):** - - GET /v1/repositories///tags/ - -**Return**: - - "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f" - -**4.3.3 Delete a tag (registry):** - - DELETE /v1/repositories///tags/ - -### 4.4 Images (Docker Hub) - -For the Docker Hub to “resolve” the repository name to a Registry location, -it uses the X-Docker-Endpoints header. In other terms, this requests -always add a `X-Docker-Endpoints` to indicate the -location of the registry which hosts this repository. 
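Client-side handling of the `X-Docker-Endpoints` header is trivial to sketch: split the comma-separated list and pick a registry host. This is a hedged illustration; the document does not specify how a client chooses among endpoints, so the random selection policy here is an assumption:

```python
import random

def pick_endpoint(header_value, rng=random):
    """Split an X-Docker-Endpoints value and pick one registry host."""
    endpoints = [h.strip() for h in header_value.split(",") if h.strip()]
    if not endpoints:
        raise ValueError("no endpoints advertised")
    return rng.choice(endpoints)

pick_endpoint("registry.docker.io, registry2.docker.io")
```

With a single advertised endpoint the choice is forced; with several, any entry in the list is considered authoritative for the repository.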
- -**4.4.1 Get the images:** - - GET /v1/repositories///images - - **Return**: HTTP 200 - [{“id”: - “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, - “checksum”: - “md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}] - -### Add/update the images: - -You always add images; you never remove them. - - PUT /v1/repositories///images - -**Body**: - - [ {“id”: - “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, - “checksum”: - “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”} - ] - -**Return**: - - 204 - -### Repositories - -### Remove a Repository (Registry) - -DELETE /v1/repositories// - -Return 200 OK - -### Remove a Repository (Docker Hub) - -This starts the delete process. See 2.3 for more details. - -DELETE /v1/repositories// - -Return 202 OK - -## Chaining Registries - -It's possible to chain Registry servers for several reasons: - - - Load balancing - - Delegate the next request to another server - -When a Registry is a reference for a repository, it should host the -entire image chain in order to avoid breaking the chain during the -download. - -The Docker Hub and Registry use this mechanism to redirect on one or the -other. - -Example with an image download: - -On every request, a special header can be returned: - - X-Docker-Endpoints: server1,server2 - -On the next request, the client will always pick a server from this -list. - -## Authentication & Authorization - -### On the Docker Hub - -The Docker Hub supports both “Basic” and “Token” challenges. Usually when -there is a `401 Unauthorized`, the Docker Hub replies -with this: - - 401 Unauthorized - WWW-Authenticate: Basic realm="auth required",Token - -You have 3 options: - -1. 
Provide user credentials and ask for a token - -**Header**: - - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - X-Docker-Token: true - -In this case, along with the 200 response, you'll get a new token -(if user auth is ok): If authorization isn't correct you get a 401 -response. If account isn't active you will get a 403 response. - -**Response**: - - 200 OK - X-Docker-Token: Token - signature=123abc,repository=”foo/bar”,access=read - - -2. Provide user credentials only - -**Header**: - - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - -3. Provide Token - -**Header**: - - Authorization: Token - signature=123abc,repository=”foo/bar”,access=read - -### 6.2 On the Registry - -The Registry only supports the Token challenge: - - 401 Unauthorized - WWW-Authenticate: Token - -The only way is to provide a token on `401 Unauthorized` -responses: - - Authorization: Token signature=123abc,repository="foo/bar",access=read - -Usually, the Registry provides a Cookie when a Token verification -succeeded. Every time the Registry passes a Cookie, you have to pass it -back the same cookie.: - - 200 OK - Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=×tamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly - -Next request: - - GET /(...) - Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=×tamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4=" - -## Document Version - - - 1.0 : May 6th 2013 : initial release - - 1.1 : June 1st 2013 : Added Delete Repository and way to handle new - source namespace. 
- diff --git a/reference/api/registry_api.md~ b/reference/api/registry_api.md~ deleted file mode 100644 index 54a158934a..0000000000 --- a/reference/api/registry_api.md~ +++ /dev/null @@ -1,593 +0,0 @@ -page_title: Registry API -page_description: API Documentation for Docker Registry -page_keywords: API, Docker, index, registry, REST, documentation - -# Docker Registry API - -## Introduction - - - This is the REST API for the Docker Registry - - It stores the images and the graph for a set of repositories - - It does not have user accounts data - - It has no notion of user accounts or authorization - - It delegates authentication and authorization to the Index Auth - service using tokens - - It supports different storage backends (S3, cloud files, local FS) - - It doesn't have a local database - - The registry is open source: [Docker Registry](https://github.com/docker/docker-registry) - - We expect that there will be multiple registries out there. To help to -grasp the context, here are some examples of registries: - - - **sponsor registry**: such a registry is provided by a third-party - hosting infrastructure as a convenience for their customers and the - Docker community as a whole. Its costs are supported by the third - party, but the management and operation of the registry are - supported by Docker. It features read/write access, and delegates - authentication and authorization to the Index. - - **mirror registry**: such a registry is provided by a third-party - hosting infrastructure but is targeted at their customers only. Some - mechanism (unspecified to date) ensures that public images are - pulled from a sponsor registry to the mirror registry, to make sure - that the customers of the third-party provider can `docker pull` - those images locally. - - **vendor registry**: such a registry is provided by a software - vendor, who wants to distribute Docker images. It would be operated - and managed by the vendor. 
Only users authorized by the vendor would - be able to get write access. Some images would be public (accessible - for anyone), others private (accessible only for authorized users). - Authentication and authorization would be delegated to the Index. - The goal of vendor registries is to let someone do `docker pull - basho/riak1.3` and automatically push from the vendor registry - (instead of a sponsor registry); i.e., get all the convenience of a - sponsor registry, while retaining control on the asset distribution. - - **private registry**: such a registry is located behind a firewall, - or protected by an additional security layer (HTTP authorization, - SSL client-side certificates, IP address authorization...). The - registry is operated by a private entity, outside of Docker's - control. It can optionally delegate additional authorization to the - Index, but it is not mandatory. - -> **Note**: -> Mirror registries and private registries which do not use the Index -> don't even need to run the registry code. They can be implemented by any -> kind of transport implementing HTTP GET and PUT. Read-only registries -> can be powered by a simple static HTTPS server. - -> **Note**: -> The latter implies that while HTTP is the protocol of choice for a registry, -> multiple schemes are possible (and in some cases, trivial): -> -> - HTTP with GET (and PUT for read-write registries); -> - local mount point; -> - remote Docker addressed through SSH. - -The latter would only require two new commands in Docker, e.g., -`registryget` and `registryput`, wrapping access to the local filesystem -(and optionally doing consistency checks). Authentication and authorization -are then delegated to SSH (e.g., with public keys). - -> **Note**: -> Private registry servers that expose an HTTP endpoint need to be secured with -> TLS (preferably TLSv1.2, but at least TLSv1.0). 
Make sure to put the CA -> certificate at /etc/docker/certs.d/my.registry.com:5000/ca.crt on the Docker -> host, so that the daemon can securely access the private registry. -> Support for SSLv3 and lower is not available due to security issues. - -The default namespace for a private repository is `library`. - -# Endpoints - -## Images - -### Get image layer - -`GET /v1/images/(image_id)/layer` - -Get image layer for a given `image_id` - -**Example Request**: - - GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Authorization: Token signature=123abc,repository="foo/bar",access=read - -Parameters: - -- **image_id** – the id for the layer you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - X-Docker-Registry-Version: 0.6.0 - Cookie: (Cookie provided by the Registry) - - {layer binary data stream} - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Image not found - -### Put image layer - -`PUT /v1/images/(image_id)/layer` - -Put image layer for a given `image_id` - -**Example Request**: - - PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1 - Host: registry-1.docker.io - Transfer-Encoding: chunked - Authorization: Token signature=123abc,repository="foo/bar",access=write - - {layer binary data stream} - -Parameters: - -- **image_id** – the id for the layer you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Image not found - -## Image - -### Put image layer - -`PUT /v1/images/(image_id)/json` - -Put image for a given `image_id` - -**Example Request**: - - PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1 - Host: 
registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - - { - id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c", - parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f", - created: "2013-04-30T17:46:10.843673+03:00", - container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7", - container_config: { - Hostname: "host-test", - User: "", - Memory: 0, - MemorySwap: 0, - AttachStdin: false, - AttachStdout: false, - AttachStderr: false, - PortSpecs: null, - Tty: false, - OpenStdin: false, - StdinOnce: false, - Env: null, - Cmd: [ - "/bin/bash", - "-c", - "apt-get -q -yy -f install libevent-dev" - ], - Dns: null, - Image: "imagename/blah", - Volumes: { }, - VolumesFrom: "" - }, - docker_version: "0.1.7" - } - -Parameters: - -- **image_id** – the id for the layer you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK -- **401** – Requires authorization - -### Get image layer - -`GET /v1/images/(image_id)/json` - -Get image for a given `image_id` - -**Example Request**: - - GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - -Parameters: - -- **image_id** – the id for the layer you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - X-Docker-Size: 456789 - X-Docker-Checksum: b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087 - - { - id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c", - parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f", - created: "2013-04-30T17:46:10.843673+03:00", - container: 
"8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7", - container_config: { - Hostname: "host-test", - User: "", - Memory: 0, - MemorySwap: 0, - AttachStdin: false, - AttachStdout: false, - AttachStderr: false, - PortSpecs: null, - Tty: false, - OpenStdin: false, - StdinOnce: false, - Env: null, - Cmd: [ - "/bin/bash", - "-c", - "apt-get -q -yy -f install libevent-dev" - ], - Dns: null, - Image: "imagename/blah", - Volumes: { }, - VolumesFrom: "" - }, - docker_version: "0.1.7" - } - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Image not found - -## Ancestry - -### Get image ancestry - -`GET /v1/images/(image_id)/ancestry` - -Get ancestry for an image given an `image_id` - -**Example Request**: - - GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - -Parameters: - -- **image_id** – the id for the layer you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - ["088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f", - "aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543", - "bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280", - "6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544"] - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Image not found - -## Tags - -### List repository tags - -`GET /v1/repositories/(namespace)/(repository)/tags` - -Get all of the tags for the given repo. 
- -**Example Request**: - - GET /v1/repositories/reynholm/help-system-server/tags HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - Cookie: (Cookie provided by the Registry) - -Parameters: - -- **namespace** – namespace for the repo -- **repository** – name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - { - "latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", - "0.1.1": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087" - } - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Repository not found - -### Get image id for a particular tag - -`GET /v1/repositories/(namespace)/(repository)/tags/(tag*)` - -Get a tag for the given repo. - -**Example Request**: - - GET /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - Cookie: (Cookie provided by the Registry) - -Parameters: - -- **namespace** – namespace for the repo -- **repository** – name for the repo -- **tag** – name of tag you want to get - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f" - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Tag not found - -### Delete a repository tag - -`DELETE /v1/repositories/(namespace)/(repository)/tags/(tag*)` - -Delete the tag for the repo - -**Example Request**: - - DELETE /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - -Parameters: - -- **namespace** – namespace 
for the repo -- **repository** – name for the repo -- **tag** – name of tag you want to delete - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Tag not found - -### Set a tag for a specified image id - -`PUT /v1/repositories/(namespace)/(repository)/tags/(tag*)` - -Put a tag for the given repo. - -**Example Request**: - - PUT /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - - "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f" - -Parameters: - -- **namespace** – namespace for the repo -- **repository** – name for the repo -- **tag** – name of tag you want to add - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK -- **400** – Invalid data -- **401** – Requires authorization -- **404** – Image not found - -## Repositories - -### Delete a repository - -`DELETE /v1/repositories/(namespace)/(repository)/` - -Delete a repository - -**Example Request**: - - DELETE /v1/repositories/reynholm/help-system-server/ HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - Cookie: (Cookie provided by the Registry) - - "" - -Parameters: - -- **namespace** – namespace for the repo -- **repository** – name for the repo - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK -- **401** – Requires authorization -- **404** – Repository not found - -## Search - -If you need to search the index, this is the endpoint you would use. - -`GET /v1/search` - -Search the Index given a search term. 
It accepts - - [GET](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3) - only. - -**Example request**: - - GET /v1/search?q=search_term&page=1&n=25 HTTP/1.1 - Host: index.docker.io - Accept: application/json - -Query Parameters: - -- **q** – what you want to search for -- **n** - number of results you want returned per page (default: 25, min:1, max:100) -- **page** - page number of results - -**Example response**: - - HTTP/1.1 200 OK - Vary: Accept - Content-Type: application/json - - {"num_pages": 1, - "num_results": 3, - "results" : [ - {"name": "ubuntu", "description": "An ubuntu image..."}, - {"name": "centos", "description": "A centos image..."}, - {"name": "fedora", "description": "A fedora image..."} - ], - "page_size": 25, - "query":"search_term", - "page": 1 - } - -Response Items: -- **num_pages** - Total number of pages returned by query -- **num_results** - Total number of results returned by query -- **results** - List of results for the current page -- **page_size** - How many results returned per page -- **query** - Your search term -- **page** - Current page number - -Status Codes: - -- **200** – no error -- **500** – server error - -## Status - -### Status check for registry - -`GET /v1/_ping` - -Check status of the registry. This endpoint is also used to -determine if the registry supports SSL. - -**Example Request**: - - GET /v1/_ping HTTP/1.1 - Host: registry-1.docker.io - Accept: application/json - Content-Type: application/json - - "" - -**Example Response**: - - HTTP/1.1 200 - Vary: Accept - Content-Type: application/json - X-Docker-Registry-Version: 0.6.0 - - "" - -Status Codes: - -- **200** – OK - -## Authorization - -This is where we describe the authorization process, including the -tokens and cookies. 
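The authorization process described here pairs a one-shot Token with a Registry-issued Cookie: the client presents the Token first, and once the Registry grants a session via `Set-Cookie`, the client echoes that cookie back on every subsequent request. A rough illustrative sketch of the client-side bookkeeping (a hypothetical helper class, not a real client implementation):

```python
class RegistrySession:
    """Tracks which Authorization/Cookie headers to send next.

    Illustrative only: the token string and cookie handling mirror
    the examples in this document, not a production HTTP client.
    """

    def __init__(self, token):
        self.token = token
        self.cookie = None

    def request_headers(self):
        if self.cookie:
            # Registry granted a session: echo the same cookie back
            return {"Cookie": self.cookie}
        return {"Authorization": "Token " + self.token}

    def handle_response(self, headers):
        if "Set-Cookie" in headers:
            # keep only the name=value pair, dropping cookie attributes
            self.cookie = headers["Set-Cookie"].split(";")[0]

s = RegistrySession('signature=123abc,repository="foo/bar",access=read')
s.request_headers()   # Token header on the first request
s.handle_response({"Set-Cookie": 'session="abc"; Path=/; HttpOnly'})
s.request_headers()   # the Registry's cookie is sent from here on
```

Since tokens are never reused, a client that loses its cookie has to go back to the Docker Hub for a fresh token.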
- diff --git a/reference/api/registry_api_client_libraries.md~ b/reference/api/registry_api_client_libraries.md~ deleted file mode 100644 index 6977af3cc4..0000000000 --- a/reference/api/registry_api_client_libraries.md~ +++ /dev/null @@ -1,42 +0,0 @@ -page_title: Registry API Client Libraries -page_description: Various client libraries available to use with the Docker registry API -page_keywords: API, Docker, index, registry, REST, documentation, clients, C#, Erlang, Go, Groovy, Java, JavaScript, Perl, PHP, Python, Ruby, Rust, Scala - -# Docker Registry API Client Libraries - -These libraries have not been tested by the Docker maintainers for -compatibility. Please file issues with the library owners. If you find -more library implementations, please submit a PR with an update to this page -or open an issue in the [Docker](https://github.com/docker/docker/issues) -project and we will add the libraries here. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Language/FrameworkNameRepositoryStatus
JavaScript (AngularJS) WebUIdocker-registry-frontendhttps://github.com/kwk/docker-registry-frontendActive
Godocker-reg-clienthttps://github.com/CenturyLinkLabs/docker-reg-clientActive
diff --git a/reference/api/remote_api_client_libraries.md~ b/reference/api/remote_api_client_libraries.md~ deleted file mode 100644 index d79bbd89ab..0000000000 --- a/reference/api/remote_api_client_libraries.md~ +++ /dev/null @@ -1,174 +0,0 @@ -page_title: Remote API Client Libraries -page_description: Various client libraries available to use with the Docker remote API -page_keywords: API, Docker, index, registry, REST, documentation, clients, C#, Erlang, Go, Groovy, Java, JavaScript, Perl, PHP, Python, Ruby, Rust, Scala - -# Docker Remote API Client Libraries - -These libraries have not been tested by the Docker maintainers for -compatibility. Please file issues with the library owners. If you find -more library implementations, please list them in Docker doc bugs and we -will add the libraries here. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Language/FrameworkNameRepositoryStatus
C#Docker.DotNethttps://github.com/ahmetalpbalkan/Docker.DotNetActive
C++lasote/docker_clienthttp://www.biicode.com/lasote/docker_client (Biicode C++ dependency manager)Active
Erlangerldockerhttps://github.com/proger/erldockerActive
Gogo-dockerclienthttps://github.com/fsouza/go-dockerclientActive
Godockerclienthttps://github.com/samalba/dockerclientActive
Groovydocker-clienthttps://github.com/gesellix-docker/docker-clientActive
Javadocker-javahttps://github.com/docker-java/docker-javaActive
Javadocker-clienthttps://github.com/spotify/docker-clientActive
Javajclouds-dockerhttps://github.com/jclouds/jclouds-labs/tree/master/dockerActive
JavaScript (NodeJS)dockerodehttps://github.com/apocas/dockerode - Install via NPM: npm install dockerodeActive
JavaScript (NodeJS)docker.iohttps://github.com/appersonlabs/docker.io - Install via NPM: npm install docker.ioActive
JavaScriptdocker-jshttps://github.com/dgoujard/docker-jsOutdated
JavaScript (Angular) WebUIdocker-cphttps://github.com/13W/docker-cpActive
JavaScript (Angular) WebUIdockeruihttps://github.com/crosbymichael/dockeruiActive
PerlNet::Dockerhttps://metacpan.org/pod/Net::DockerActive
PerlEixo::Dockerhttps://github.com/alambike/eixo-dockerActive
PHPAlvinehttp://pear.alvine.io/ (alpha)Active
PHPDocker-PHPhttp://stage1.github.io/docker-php/Active
Pythondocker-pyhttps://github.com/docker/docker-pyActive
Rubydocker-apihttps://github.com/swipely/docker-apiActive
Rubydocker-clienthttps://github.com/geku/docker-clientOutdated
Rustdocker-rusthttps://github.com/abh1nav/docker-rustActive
Scalatugboathttps://github.com/softprops/tugboatActive
Scalareactive-dockerhttps://github.com/almoehi/reactive-dockerActive
diff --git a/reference/builder.md~ b/reference/builder.md~ deleted file mode 100644 index 3d34db38cc..0000000000 --- a/reference/builder.md~ +++ /dev/null @@ -1,943 +0,0 @@ -page_title: Dockerfile Reference -page_description: Dockerfiles use a simple DSL which allows you to automate the steps you would normally manually take to create an image. -page_keywords: builder, docker, Dockerfile, automation, image creation - -# Dockerfile Reference - -**Docker can build images automatically** by reading the instructions -from a `Dockerfile`. A `Dockerfile` is a text document that contains all -the commands you would normally execute manually in order to build a -Docker image. By calling `docker build` from your terminal, you can have -Docker build your image step by step, executing the instructions -successively. - -This page discusses the specifics of all the instructions you can use in your -`Dockerfile`. To further help you write a clear, readable, maintainable -`Dockerfile`, we've also written a [`Dockerfile` Best Practices -guide](/articles/dockerfile_best-practices). Lastly, you can test your -Dockerfile knowledge with the [Dockerfile tutorial](/userguide/level1). - -## Usage - -To [*build*](/reference/commandline/cli/#build) an image from a source repository, -create a description file called `Dockerfile` at the root of your repository. -This file will describe the steps to assemble the image. - -Then call `docker build` with the path of your source repository as the argument -(for example, `.`): - - $ sudo docker build . - -The path to the source repository defines where to find the *context* of -the build. The build is run by the Docker daemon, not by the CLI, so the -whole context must be transferred to the daemon. The Docker CLI reports -"Sending build context to Docker daemon" when the context is sent to the daemon. - -> **Warning** -> Avoid using your root directory, `/`, as the root of the source repository. 
The -> `docker build` command will use whatever directory contains the Dockerfile as the build -> context (including all of its subdirectories). The build context will be sent to the -> Docker daemon before building the image, which means if you use `/` as the source -> repository, the entire contents of your hard drive will get sent to the daemon (and -> thus to the machine running the daemon). You probably don't want that. - -In most cases, it's best to put each Dockerfile in an empty directory, and then add only -the files needed for building that Dockerfile to that directory. To further speed up the -build, you can exclude files and directories by adding a `.dockerignore` file to the same -directory. - -You can specify a repository and tag at which to save the new image if -the build succeeds: - - $ sudo docker build -t shykes/myapp . - -The Docker daemon will run your steps one-by-one, committing the result -to a new image if necessary, before finally outputting the ID of your -new image. The Docker daemon will automatically clean up the context you -sent. - -Note that each instruction is run independently, and causes a new image -to be created - so `RUN cd /tmp` will not have any effect on the next -instructions. - -Whenever possible, Docker will re-use the intermediate images, -accelerating `docker build` significantly (indicated by `Using cache` - -see the [`Dockerfile` Best Practices -guide](/articles/dockerfile_best-practices/#build-cache) for more information): - - $ sudo docker build -t SvenDowideit/ambassador . 
    Uploading context 10.24 kB
    Uploading context
    Step 1 : FROM docker-ut
     ---> cbba202fe96b
    Step 2 : MAINTAINER SvenDowideit@home.org.au
     ---> Using cache
     ---> 51182097be13
    Step 3 : CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' | sh && top
     ---> Using cache
     ---> 1a5ffc17324d
    Successfully built 1a5ffc17324d

When you're done with your build, you're ready to look into [*Pushing a
repository to its registry*](/userguide/dockerrepos/#contributing-to-docker-hub).

## Format

Here is the format of the `Dockerfile`:

    # Comment
    INSTRUCTION arguments

Instructions are not case-sensitive; however, the convention is to write them
in UPPERCASE in order to distinguish them from arguments more easily.

Docker runs the instructions in a `Dockerfile` in order. **The
first instruction must be `FROM`** in order to specify the [*Base
Image*](/terms/image/#base-image) from which you are building.

Docker will treat lines that *begin* with `#` as a
comment. A `#` marker anywhere else in the line will
be treated as an argument. This allows statements like:

    # Comment
    RUN echo 'we are running some # of cool things'

Here is the set of instructions you can use in a `Dockerfile` for building
images.

### Environment Replacement

> **Note**: prior to 1.3, `Dockerfile` environment variables were handled
> similarly, in that they would be replaced as described below. However, there
> was no formal definition as to which instructions handled environment
> replacement at the time. After 1.3 this behavior is preserved and
> canonical.

Environment variables (declared with [the `ENV` statement](#env)) can also be
used in certain instructions as variables to be interpreted by the
`Dockerfile`. Escapes are also handled for including variable-like syntax
into a statement literally.
Environment variables are notated in the `Dockerfile` either with
`$variable_name` or `${variable_name}`. They are treated equivalently, and the
brace syntax is typically used to address issues with variable names that are
not followed by whitespace, like `${foo}_bar`.

Escaping is possible by adding a `\` before the variable: `\$foo` or `\${foo}`,
for example, will translate to `$foo` and `${foo}` literals respectively.

Example (parsed representation is displayed after the `#`):

    FROM busybox
    ENV foo /bar
    WORKDIR ${foo}    # WORKDIR /bar
    ADD . $foo        # ADD . /bar
    COPY \$foo /quux  # COPY $foo /quux

The instructions that handle environment variables in the `Dockerfile` are:

* `ENV`
* `ADD`
* `COPY`
* `WORKDIR`
* `EXPOSE`
* `VOLUME`
* `USER`

`ONBUILD` instructions are **NOT** supported for environment replacement, not
even for the instructions listed above.

## The `.dockerignore` file

If a file named `.dockerignore` exists in the source repository, then it
is interpreted as a newline-separated list of exclusion patterns.
Exclusion patterns match files or directories relative to the source repository
that will be excluded from the context. Globbing is done using Go's
[filepath.Match](http://golang.org/pkg/path/filepath#Match) rules.

> **Note**:
> The `.dockerignore` file can even be used to ignore the `Dockerfile` and
> `.dockerignore` files. This might be useful if you are copying files from
> the root of the build context into your new container but do not want to
> include the `Dockerfile` or `.dockerignore` files (e.g. `ADD . /someDir/`).

The following example shows the use of the `.dockerignore` file to exclude the
`.git` directory from the context. Its effect can be seen in the changed size of
the uploaded context.

    $ sudo docker build .
    Uploading context 18.829 MB
    Uploading context
    Step 0 : FROM busybox
     ---> 769b9341d937
    Step 1 : CMD echo Hello World
     ---> Using cache
     ---> 99cc1ad10469
    Successfully built 99cc1ad10469
    $ echo ".git" > .dockerignore
    $ sudo docker build .
    Uploading context 6.76 MB
    Uploading context
    Step 0 : FROM busybox
     ---> 769b9341d937
    Step 1 : CMD echo Hello World
     ---> Using cache
     ---> 99cc1ad10469
    Successfully built 99cc1ad10469

## FROM

    FROM <image>

Or

    FROM <image>:<tag>

The `FROM` instruction sets the [*Base Image*](/terms/image/#base-image)
for subsequent instructions. As such, a valid `Dockerfile` must have `FROM` as
its first instruction. The image can be any valid image – it is especially easy
to start by **pulling an image** from the [*Public
Repositories*](/userguide/dockerrepos).

`FROM` must be the first non-comment instruction in the `Dockerfile`.

`FROM` can appear multiple times within a single `Dockerfile` in order to create
multiple images. Simply make a note of the last image ID output by the commit
before each new `FROM` command.

If no `tag` is given to the `FROM` instruction, `latest` is assumed. If the
tag used does not exist, an error will be returned.

## MAINTAINER

    MAINTAINER <name>

The `MAINTAINER` instruction allows you to set the *Author* field of the
generated images.

## RUN

RUN has 2 forms:

- `RUN <command>` (the command is run in a shell - `/bin/sh -c` - *shell* form)
- `RUN ["executable", "param1", "param2"]` (*exec* form)

The `RUN` instruction will execute any commands in a new layer on top of the
current image and commit the results. The resulting committed image will be
used for the next step in the `Dockerfile`.

Layering `RUN` instructions and generating commits conforms to the core
concepts of Docker, where commits are cheap and containers can be created from
any point in an image's history, much like source control.
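The two `RUN` forms can be sketched side by side; the package and file names in this snippet are illustrative, not taken from the text above:

    # shell form: processed by /bin/sh -c, so pipes and && work
    RUN apt-get update && apt-get install -y curl
    # exec form: no shell is involved; each argument is passed verbatim
    RUN ["/usr/bin/touch", "/marker"]
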
The *exec* form makes it possible to avoid shell string munging, and to `RUN`
commands using a base image that does not contain `/bin/sh`.

> **Note**:
> To use a different shell, other than `/bin/sh`, use the *exec* form
> passing in the desired shell. For example,
> `RUN ["/bin/bash", "-c", "echo hello"]`

> **Note**:
> The *exec* form is parsed as a JSON array, which means that
> you must use double-quotes (") around words, not single-quotes (').

> **Note**:
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `RUN [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `RUN [ "sh", "-c", "echo $HOME" ]`.

The cache for `RUN` instructions isn't invalidated automatically during
the next build. The cache for an instruction like
`RUN apt-get dist-upgrade -y` will be reused during the next build. The
cache for `RUN` instructions can be invalidated by using the `--no-cache`
flag, for example `docker build --no-cache`.

See the [`Dockerfile` Best Practices
guide](/articles/dockerfile_best-practices/#build-cache) for more information.

The cache for `RUN` instructions can also be invalidated by `ADD` instructions.
See [below](#add) for details.

### Known Issues (RUN)

- [Issue 783](https://github.com/docker/docker/issues/783) is about file
  permissions problems that can occur when using the AUFS file system. You
  might notice it during an attempt to `rm` a file, for example. The issue
  describes a workaround.

## CMD

The `CMD` instruction has three forms:

- `CMD ["executable","param1","param2"]` (*exec* form, this is the preferred form)
- `CMD ["param1","param2"]` (as *default parameters to ENTRYPOINT*)
- `CMD command param1 param2` (*shell* form)

There can only be one `CMD` instruction in a `Dockerfile`.
If you list more than one `CMD`
then only the last `CMD` will take effect.

**The main purpose of a `CMD` is to provide defaults for an executing
container.** These defaults can include an executable, or they can omit
the executable, in which case you must specify an `ENTRYPOINT`
instruction as well.

> **Note**:
> If `CMD` is used to provide default arguments for the `ENTRYPOINT`
> instruction, both the `CMD` and `ENTRYPOINT` instructions should be specified
> with the JSON array format.

> **Note**:
> The *exec* form is parsed as a JSON array, which means that
> you must use double-quotes (") around words, not single-quotes (').

> **Note**:
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `CMD [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `CMD [ "sh", "-c", "echo $HOME" ]`.

When used in the shell or exec formats, the `CMD` instruction sets the command
to be executed when running the image.

If you use the *shell* form of the `CMD`, then the `<command>` will execute in
`/bin/sh -c`:

    FROM ubuntu
    CMD echo "This is a test." | wc -

If you want to **run your** `<command>` **without a shell** then you must
express the command as a JSON array and give the full path to the executable.
**This array form is the preferred format of `CMD`.** Any additional parameters
must be individually expressed as strings in the array:

    FROM ubuntu
    CMD ["/usr/bin/wc","--help"]

If you would like your container to run the same executable every time, then
you should consider using `ENTRYPOINT` in combination with `CMD`. See
[*ENTRYPOINT*](#entrypoint).

If the user specifies arguments to `docker run` then they will override the
default specified in `CMD`.

> **Note**:
> Don't confuse `RUN` with `CMD`.
> `RUN` actually runs a command and commits
> the result; `CMD` does not execute anything at build time, but specifies
> the intended command for the image.

## EXPOSE

    EXPOSE <port> [<port>...]

The `EXPOSE` instruction informs Docker that the container will listen on the
specified network ports at runtime. Docker uses this information to interconnect
containers using links (see the [Docker User
Guide](/userguide/dockerlinks)) and to determine which ports to expose to the
host when [using the -P flag](/reference/run/#expose-incoming-ports).

> **Note**:
> `EXPOSE` doesn't define which ports can be exposed to the host or make ports
> accessible from the host by default. To expose ports to the host, at runtime,
> [use the `-p` flag](/userguide/dockerlinks) or
> [the -P flag](/reference/run/#expose-incoming-ports).

## ENV

    ENV <key> <value>
    ENV <key>=<value> ...

The `ENV` instruction sets the environment variable `<key>` to the value
`<value>`. This value will be in the environment of all "descendant"
`Dockerfile` commands and can be [replaced inline](#environment-replacement)
in many of them as well.

The `ENV` instruction has two forms. The first form, `ENV <key> <value>`,
will set a single variable to a value. The entire string after the first
space will be treated as the `<value>` - including characters such as
spaces and quotes.

The second form, `ENV <key>=<value> ...`, allows for multiple variables to
be set at one time. Notice that the second form uses the equals sign (=)
in the syntax, while the first form does not. Like command line parsing,
quotes and backslashes can be used to include spaces within values.

For example:

    ENV myName="John Doe" myDog=Rex\ The\ Dog \
        myCat=fluffy

and

    ENV myName John Doe
    ENV myDog Rex The Dog
    ENV myCat fluffy

will yield the same net results in the final container, but the first form
does it all in one layer.

The environment variables set using `ENV` will persist when a container is run
from the resulting image.
You can view the values using `docker inspect`, and
change them using `docker run --env <key>=<value>`.

> **Note**:
> Environment persistence can cause unexpected effects. For example,
> setting `ENV DEBIAN_FRONTEND noninteractive` may confuse apt-get
> users on a Debian-based image. To set a value for a single command, use
> `RUN <key>=<value> <command>`.

## ADD

ADD has two forms:

- `ADD <src>... <dest>`
- `ADD ["<src>"... "<dest>"]` (this form is required for paths containing
  whitespace)

The `ADD` instruction copies new files, directories or remote file URLs from
`<src>` and adds them to the filesystem of the container at the path `<dest>`.

Multiple `<src>` resources may be specified, but if they are files or
directories then they must be relative to the source directory that is
being built (the context of the build).

Each `<src>` may contain wildcards and matching will be done using Go's
[filepath.Match](http://golang.org/pkg/path/filepath#Match) rules.
For most command line uses this should act as expected, for example:

    ADD hom* /mydir/        # adds all files starting with "hom"
    ADD hom?.txt /mydir/    # ? is replaced with any single character

The `<dest>` is an absolute path, or a path relative to `WORKDIR`, into which
the source will be copied inside the destination container.

    ADD test aDir/          # adds "test" to `WORKDIR`/aDir/

All new files and directories are created with a UID and GID of 0.

In the case where `<src>` is a remote file URL, the destination will
have permissions of 600. If the remote file being retrieved has an HTTP
`Last-Modified` header, the timestamp from that header will be used
to set the `mtime` on the destination file. Then, like any other file
processed during an `ADD`, `mtime` will be included in the determination
of whether or not the file has changed and the cache should be updated.
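A small sketch combining these behaviors; the URL and destination paths here are hypothetical:

    FROM busybox
    # local sources, relative to the build context; wildcards use filepath.Match
    ADD hom* /mydir/
    # remote source: fetched with permissions 600; mtime taken from
    # the Last-Modified header if the server sends one
    ADD http://example.com/app.conf /etc/app/app.conf
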
> **Note**:
> If you build by passing a `Dockerfile` through STDIN (`docker
> build - < somefile`), there is no build context, so the `Dockerfile`
> can only contain a URL based `ADD` instruction. You can also pass a
> compressed archive through STDIN (`docker build - < archive.tar.gz`):
> the `Dockerfile` at the root of the archive and the rest of the
> archive will get used as the context of the build.

> **Note**:
> If your URL files are protected using authentication, you
> will need to use `RUN wget`, `RUN curl` or another tool from
> within the container, as the `ADD` instruction does not support
> authentication.

> **Note**:
> The first encountered `ADD` instruction will invalidate the cache for all
> following instructions from the Dockerfile if the contents of `<src>` have
> changed. This includes invalidating the cache for `RUN` instructions.
> See the [`Dockerfile` Best Practices
> guide](/articles/dockerfile_best-practices/#build-cache) for more information.

The copy obeys the following rules:

- The `<src>` path must be inside the *context* of the build;
  you cannot `ADD ../something /something`, because the first step of a
  `docker build` is to send the context directory (and subdirectories) to the
  docker daemon.

- If `<src>` is a URL and `<dest>` does not end with a trailing slash, then a
  file is downloaded from the URL and copied to `<dest>`.

- If `<src>` is a URL and `<dest>` does end with a trailing slash, then the
  filename is inferred from the URL and the file is downloaded to
  `<dest>/<filename>`. For instance, `ADD http://example.com/foobar /` would
  create the file `/foobar`. The URL must have a nontrivial path so that an
  appropriate filename can be discovered in this case (`http://example.com`
  will not work).

- If `<src>` is a directory, the entire contents of the directory are copied,
  including filesystem metadata.

> **Note**:
> The directory itself is not copied, just its contents.
- If `<src>` is a *local* tar archive in a recognized compression format
  (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources
  from *remote* URLs are **not** decompressed. When a directory is copied or
  unpacked, it has the same behavior as `tar -x`: the result is the union of:

    1. Whatever existed at the destination path and
    2. The contents of the source tree, with conflicts resolved in favor
       of "2." on a file-by-file basis.

- If `<src>` is any other kind of file, it is copied individually along with
  its metadata. In this case, if `<dest>` ends with a trailing slash `/`, it
  will be considered a directory and the contents of `<src>` will be written
  at `<dest>/base(<src>)`.

- If multiple `<src>` resources are specified, either directly or due to the
  use of a wildcard, then `<dest>` must be a directory, and it must end with
  a slash `/`.

- If `<dest>` does not end with a trailing slash, it will be considered a
  regular file and the contents of `<src>` will be written at `<dest>`.

- If `<dest>` doesn't exist, it is created along with all missing directories
  in its path.

## COPY

COPY has two forms:

- `COPY <src>... <dest>`
- `COPY ["<src>"... "<dest>"]` (this form is required for paths containing
  whitespace)

The `COPY` instruction copies new files or directories from `<src>`
and adds them to the filesystem of the container at the path `<dest>`.

Multiple `<src>` resources may be specified, but they must be relative
to the source directory that is being built (the context of the build).

Each `<src>` may contain wildcards and matching will be done using Go's
[filepath.Match](http://golang.org/pkg/path/filepath#Match) rules.
For most command line uses this should act as expected, for example:

    COPY hom* /mydir/        # adds all files starting with "hom"
    COPY hom?.txt /mydir/    # ? is replaced with any single character

The `<dest>` is an absolute path, or a path relative to `WORKDIR`, into which
the source will be copied inside the destination container.
    COPY test aDir/          # adds "test" to `WORKDIR`/aDir/

All new files and directories are created with a UID and GID of 0.

> **Note**:
> If you build using STDIN (`docker build - < somefile`), there is no
> build context, so `COPY` can't be used.

The copy obeys the following rules:

- The `<src>` path must be inside the *context* of the build;
  you cannot `COPY ../something /something`, because the first step of a
  `docker build` is to send the context directory (and subdirectories) to the
  docker daemon.

- If `<src>` is a directory, the entire contents of the directory are copied,
  including filesystem metadata.

> **Note**:
> The directory itself is not copied, just its contents.

- If `<src>` is any other kind of file, it is copied individually along with
  its metadata. In this case, if `<dest>` ends with a trailing slash `/`, it
  will be considered a directory and the contents of `<src>` will be written
  at `<dest>/base(<src>)`.

- If multiple `<src>` resources are specified, either directly or due to the
  use of a wildcard, then `<dest>` must be a directory, and it must end with
  a slash `/`.

- If `<dest>` does not end with a trailing slash, it will be considered a
  regular file and the contents of `<src>` will be written at `<dest>`.

- If `<dest>` doesn't exist, it is created along with all missing directories
  in its path.

## ENTRYPOINT

ENTRYPOINT has two forms:

- `ENTRYPOINT ["executable", "param1", "param2"]`
  (the preferred *exec* form)
- `ENTRYPOINT command param1 param2`
  (*shell* form)

An `ENTRYPOINT` allows you to configure a container that will run as an executable.

For example, the following will start nginx with its default content, listening
on port 80:

    docker run -i -t --rm -p 80:80 nginx

Command line arguments to `docker run <image>` will be appended after all
elements in an *exec* form `ENTRYPOINT`, and will override all elements specified
using `CMD`.
This allows arguments to be passed to the entry point, i.e., `docker run <image> -d`
will pass the `-d` argument to the entry point.
You can override the `ENTRYPOINT` instruction using the `docker run --entrypoint`
flag.

The *shell* form prevents any `CMD` or `run` command line arguments from being
used, but has the disadvantage that your `ENTRYPOINT` will be started as a
subcommand of `/bin/sh -c`, which does not pass signals.
This means that the executable will not be the container's `PID 1` - and
will *not* receive Unix signals - so your executable will not receive a
`SIGTERM` from `docker stop <container>`.

Only the last `ENTRYPOINT` instruction in the `Dockerfile` will have an effect.

### Exec form ENTRYPOINT example

You can use the *exec* form of `ENTRYPOINT` to set fairly stable default commands
and arguments, and then use either form of `CMD` to set additional defaults that
are more likely to be changed.

    FROM ubuntu
    ENTRYPOINT ["top", "-b"]
    CMD ["-c"]

When you run the container, you can see that `top` is the only process:

    $ docker run -it --rm --name test top -H
    top - 08:25:00 up  7:27,  0 users,  load average: 0.00, 0.01, 0.05
    Threads:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  0.1 us,  0.1 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem:   2056668 total,  1616832 used,   439836 free,    99352 buffers
    KiB Swap:  1441840 total,        0 used,  1441840 free.  1324440 cached Mem

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
        1 root      20   0   19744   2336   2080 R   0.0  0.1   0:00.04 top

To examine the result further, you can use `docker exec`:

    $ docker exec -it test ps aux
    USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root         1  2.6  0.1  19752  2352 ?        Ss+  08:24   0:00 top -b -H
    root         7  0.0  0.1  15572  2164 ?        R+   08:25   0:00 ps aux

And you can gracefully request `top` to shut down using `docker stop test`.
- -The following `Dockerfile` shows using the `ENTRYPOINT` to run Apache in the -foreground (i.e., as `PID 1`): - -``` -FROM debian:stable -RUN apt-get update && apt-get install -y --force-yes apache2 -EXPOSE 80 443 -VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"] -ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"] -``` - -If you need to write a starter script for a single executable, you can ensure that -the final executable receives the Unix signals by using `exec` and `gosu` -commands: - -```bash -#!/bin/bash -set -e - -if [ "$1" = 'postgres' ]; then - chown -R postgres "$PGDATA" - - if [ -z "$(ls -A "$PGDATA")" ]; then - gosu postgres initdb - fi - - exec gosu postgres "$@" -fi - -exec "$@" -``` - -Lastly, if you need to do some extra cleanup (or communicate with other containers) -on shutdown, or are co-ordinating more than one executable, you may need to ensure -that the `ENTRYPOINT` script receives the Unix signals, passes them on, and then -does some more work: - -``` -#!/bin/sh -# Note: I've written this using sh so it works in the busybox container too - -# USE the trap if you need to also do manual cleanup after the service is stopped, -# or need to start multiple services in the one container -trap "echo TRAPed signal" HUP INT QUIT KILL TERM - -# start service in background here -/usr/sbin/apachectl start - -echo "[hit enter key to exit] or run 'docker stop '" -read - -# stop service and clean up here -echo "stopping apache" -/usr/sbin/apachectl stop - -echo "exited $0" -``` - -If you run this image with `docker run -it --rm -p 80:80 --name test apache`, -you can then examine the container's processes with `docker exec`, or `docker top`, -and then ask the script to stop Apache: - -```bash -$ docker exec -it test ps aux -USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND -root 1 0.1 0.0 4448 692 ? Ss+ 00:42 0:00 /bin/sh /run.sh 123 cmd cmd2 -root 19 0.0 0.2 71304 4440 ? 
Ss   00:42   0:00 /usr/sbin/apache2 -k start
www-data    20  0.2  0.2 360468  6004 ?        Sl   00:42   0:00 /usr/sbin/apache2 -k start
www-data    21  0.2  0.2 360468  6000 ?        Sl   00:42   0:00 /usr/sbin/apache2 -k start
root        81  0.0  0.1  15572  2140 ?        R+   00:44   0:00 ps aux
$ docker top test
PID                 USER                COMMAND
10035               root                {run.sh} /bin/sh /run.sh 123 cmd cmd2
10054               root                /usr/sbin/apache2 -k start
10055               33                  /usr/sbin/apache2 -k start
10056               33                  /usr/sbin/apache2 -k start
$ /usr/bin/time docker stop test
test
real	0m 0.27s
user	0m 0.03s
sys	0m 0.03s
```

> **Note:** you can override the `ENTRYPOINT` setting using `--entrypoint`,
> but this can only set the binary to *exec* (no `sh -c` will be used).

> **Note**:
> The *exec* form is parsed as a JSON array, which means that
> you must use double-quotes (") around words, not single-quotes (').

> **Note**:
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `ENTRYPOINT [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `ENTRYPOINT [ "sh", "-c", "echo $HOME" ]`.
> Variables that are defined in the `Dockerfile` using `ENV` will be substituted
> by the `Dockerfile` parser.

### Shell form ENTRYPOINT example

You can specify a plain string for the `ENTRYPOINT` and it will execute in `/bin/sh -c`.
This form will use shell processing to substitute shell environment variables,
and will ignore any `CMD` or `docker run` command line arguments.
-To ensure that `docker stop` will signal any long running `ENTRYPOINT` executable -correctly, you need to remember to start it with `exec`: - - FROM ubuntu - ENTRYPOINT exec top -b - -When you run this image, you'll see the single `PID 1` process: - - $ docker run -it --rm --name test top - Mem: 1704520K used, 352148K free, 0K shrd, 0K buff, 140368121167873K cached - CPU: 5% usr 0% sys 0% nic 94% idle 0% io 0% irq 0% sirq - Load average: 0.08 0.03 0.05 2/98 6 - PID PPID USER STAT VSZ %VSZ %CPU COMMAND - 1 0 root R 3164 0% 0% top -b - -Which will exit cleanly on `docker stop`: - - $ /usr/bin/time docker stop test - test - real 0m 0.20s - user 0m 0.02s - sys 0m 0.04s - -If you forget to add `exec` to the beginning of your `ENTRYPOINT`: - - FROM ubuntu - ENTRYPOINT top -b - CMD --ignored-param1 - -You can then run it (giving it a name for the next step): - - $ docker run -it --name test top --ignored-param2 - Mem: 1704184K used, 352484K free, 0K shrd, 0K buff, 140621524238337K cached - CPU: 9% usr 2% sys 0% nic 88% idle 0% io 0% irq 0% sirq - Load average: 0.01 0.02 0.05 2/101 7 - PID PPID USER STAT VSZ %VSZ %CPU COMMAND - 1 0 root S 3168 0% 0% /bin/sh -c top -b cmd cmd2 - 7 1 root R 3164 0% 0% top -b - -You can see from the output of `top` that the specified `ENTRYPOINT` is not `PID 1`. - -If you then run `docker stop test`, the container will not exit cleanly - the -`stop` command will be forced to send a `SIGKILL` after the timeout: - - $ docker exec -it test ps aux - PID USER COMMAND - 1 root /bin/sh -c top -b cmd cmd2 - 7 root top -b - 8 root ps aux - $ /usr/bin/time docker stop test - test - real 0m 10.19s - user 0m 0.04s - sys 0m 0.03s - -## VOLUME - - VOLUME ["/data"] - -The `VOLUME` instruction creates a mount point with the specified name -and marks it as holding externally mounted volumes from native host or other -containers. 
The value can be a JSON array, `VOLUME ["/var/log/"]`, or a plain
string with multiple arguments, such as `VOLUME /var/log` or `VOLUME /var/log
/var/db`. For more information/examples and mounting instructions via the
Docker client, refer to the
[*Share Directories via Volumes*](/userguide/dockervolumes/#volume)
documentation.

The `docker run` command initializes the newly created volume with any data
that exists at the specified location within the base image. For example,
consider the following Dockerfile snippet:

    FROM ubuntu
    RUN mkdir /myvol
    RUN echo "hello world" > /myvol/greeting
    VOLUME /myvol

This Dockerfile results in an image that causes `docker run` to
create a new mount point at `/myvol` and copy the `greeting` file
into the newly created volume.

> **Note**:
> The list is parsed as a JSON array, which means that
> you must use double-quotes (") around words not single-quotes (').

## USER

    USER daemon

The `USER` instruction sets the user name or UID to use when running the image
and for any `RUN`, `CMD` and `ENTRYPOINT` instructions that follow it in the
`Dockerfile`.

## WORKDIR

    WORKDIR /path/to/workdir

The `WORKDIR` instruction sets the working directory for any `RUN`, `CMD`,
`ENTRYPOINT`, `COPY` and `ADD` instructions that follow it in the `Dockerfile`.

It can be used multiple times in the one `Dockerfile`. If a relative path
is provided, it will be relative to the path of the previous `WORKDIR`
instruction. For example:

    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd

The output of the final `pwd` command in this `Dockerfile` would be
`/a/b/c`.

The `WORKDIR` instruction can resolve environment variables previously set using
`ENV`. You can only use environment variables explicitly set in the `Dockerfile`.
-For example: - - ENV DIRPATH /path - WORKDIR $DIRPATH/$DIRNAME - -The output of the final `pwd` command in this `Dockerfile` would be -`/path/$DIRNAME` - -## ONBUILD - - ONBUILD [INSTRUCTION] - -The `ONBUILD` instruction adds to the image a *trigger* instruction to -be executed at a later time, when the image is used as the base for -another build. The trigger will be executed in the context of the -downstream build, as if it had been inserted immediately after the -`FROM` instruction in the downstream `Dockerfile`. - -Any build instruction can be registered as a trigger. - -This is useful if you are building an image which will be used as a base -to build other images, for example an application build environment or a -daemon which may be customized with user-specific configuration. - -For example, if your image is a reusable Python application builder, it -will require application source code to be added in a particular -directory, and it might require a build script to be called *after* -that. You can't just call `ADD` and `RUN` now, because you don't yet -have access to the application source code, and it will be different for -each application build. You could simply provide application developers -with a boilerplate `Dockerfile` to copy-paste into their application, but -that is inefficient, error-prone and difficult to update because it -mixes with application-specific code. - -The solution is to use `ONBUILD` to register advance instructions to -run later, during the next build stage. - -Here's how it works: - -1. When it encounters an `ONBUILD` instruction, the builder adds a - trigger to the metadata of the image being built. The instruction - does not otherwise affect the current build. -2. At the end of the build, a list of all triggers is stored in the - image manifest, under the key `OnBuild`. They can be inspected with - the `docker inspect` command. -3. Later the image may be used as a base for a new build, using the - `FROM` instruction. 
As part of processing the `FROM` instruction, - the downstream builder looks for `ONBUILD` triggers, and executes - them in the same order they were registered. If any of the triggers - fail, the `FROM` instruction is aborted which in turn causes the - build to fail. If all triggers succeed, the `FROM` instruction - completes and the build continues as usual. -4. Triggers are cleared from the final image after being executed. In - other words they are not inherited by "grand-children" builds. - -For example you might add something like this: - - [...] - ONBUILD ADD . /app/src - ONBUILD RUN /usr/local/bin/python-build --dir /app/src - [...] - -> **Warning**: Chaining `ONBUILD` instructions using `ONBUILD ONBUILD` isn't allowed. - -> **Warning**: The `ONBUILD` instruction may not trigger `FROM` or `MAINTAINER` instructions. - -## Dockerfile Examples - - # Nginx - # - # VERSION 0.0.1 - - FROM ubuntu - MAINTAINER Victor Vieux - - RUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server - - # Firefox over VNC - # - # VERSION 0.3 - - FROM ubuntu - - # Install vnc, xvfb in order to create a 'fake' display and firefox - RUN apt-get update && apt-get install -y x11vnc xvfb firefox - RUN mkdir ~/.vnc - # Setup a password - RUN x11vnc -storepasswd 1234 ~/.vnc/passwd - # Autostart firefox (might not be the best way, but it does the trick) - RUN bash -c 'echo "firefox" >> /.bashrc' - - EXPOSE 5900 - CMD ["x11vnc", "-forever", "-usepw", "-create"] - - # Multiple images example - # - # VERSION 0.1 - - FROM ubuntu - RUN echo foo > bar - # Will output something like ===> 907ad6c2736f - - FROM ubuntu - RUN echo moo > oink - # Will output something like ===> 695d7793cbe4 - - # You᾿ll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with - # /oink. 
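The trap-and-wait pattern from the `run.sh` script shown earlier can be exercised outside Docker. This standalone sketch (illustrative names only) shows the handler firing the moment a `TERM` arrives, because the shell blocks in the `wait` builtin rather than in a foreground child:

```shell
#!/bin/sh
# Sketch of the trap/background/wait pattern: the trap runs as soon as
# the signal is delivered, since the shell is parked in `wait`.
trap 'echo "TRAPed signal"; cleaned_up=yes' HUP INT QUIT TERM

sleep 30 &                      # stand-in for the background service
service_pid=$!

# Simulate `docker stop`: deliver SIGTERM to this script after a moment.
( sleep 1; kill -TERM $$ ) &

wait $service_pid               # interrupted by the TERM trap
kill $service_pid               # stop the "service" and clean up
echo "exited"
```

This is the same shape as the Apache `run.sh` above: background the service, block, and do cleanup after the trap fires.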
- diff --git a/reference/commandline/cli.md~ b/reference/commandline/cli.md~ deleted file mode 100644 index 4cd30aca12..0000000000 --- a/reference/commandline/cli.md~ +++ /dev/null @@ -1,2128 +0,0 @@ -page_title: Command Line Interface -page_description: Docker's CLI command description and usage -page_keywords: Docker, Docker documentation, CLI, command line - -# Command Line - -{{ include "no-remote-sudo.md" }} - -To list available commands, either run `docker` with no parameters -or execute `docker help`: - - $ sudo docker - Usage: docker [OPTIONS] COMMAND [arg...] - -H, --host=[]: The socket(s) to bind to in daemon mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. - - A self-sufficient runtime for Linux containers. - - ... - -## Help -To list the help on any command just execute the command, followed by the `--help` option. - - $ sudo docker run --help - - Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] - - Run a command in a new container - - -a, --attach=[] Attach to STDIN, STDOUT or STDERR. - -c, --cpu-shares=0 CPU shares (relative weight) - ... - -## Option types - -Single character command line options can be combined, so rather than -typing `docker run -i -t --name test busybox sh`, -you can write `docker run -it --name test busybox sh`. - -### Boolean - -Boolean options take the form `-d=false`. The value you see in the help text is the -default value which is set if you do **not** specify that flag. If you specify -a Boolean flag without a value, this will set the flag to `true`, irrespective -of the default value. - -For example, running `docker run -d` will set the value to `true`, so -your container **will** run in "detached" mode, in the background. - -Options which default to `true` (e.g., `docker build --rm=true`) can only -be set to the non-default value by explicitly setting them to `false`: - - $ docker build --rm=false . 
- -### Multi - -Options like `-a=[]` indicate they can be specified multiple times: - - $ sudo docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash - -Sometimes this can use a more complex value string, as for `-v`: - - $ sudo docker run -v /host:/container example/mysql - -### Strings and Integers - -Options like `--name=""` expect a string, and they -can only be specified once. Options like `-c=0` -expect an integer, and they can only be specified once. - -## daemon - - Usage: docker [OPTIONS] COMMAND [arg...] - - A self-sufficient runtime for linux containers. - - Options: - --api-enable-cors=false Enable CORS headers in the remote API - -b, --bridge="" Attach containers to a pre-existing network bridge - use 'none' to disable container networking - --bip="" Use this CIDR notation address for the network bridge's IP, not compatible with -b - -D, --debug=false Enable debug mode - -d, --daemon=false Enable daemon mode - --dns=[] Force Docker to use specific DNS servers - --dns-search=[] Force Docker to use specific DNS search domains - -e, --exec-driver="native" Force the Docker runtime to use a specific exec driver - --fixed-cidr="" IPv4 subnet for fixed IPs (e.g.: 10.20.0.0/16) - this subnet must be nested in the bridge subnet (which is defined by -b or --bip) - --fixed-cidr-v6="" IPv6 subnet for global IPs (e.g.: 2a00:1450::/64) - -G, --group="docker" Group to assign the unix socket specified by -H when running in daemon mode - use '' (the empty string) to disable setting of a group - -g, --graph="/var/lib/docker" Path to use as the root of the Docker runtime - -H, --host=[] The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. 
- --icc=true Allow unrestricted inter-container and Docker daemon host communication - --insecure-registry=[] Enable insecure communication with specified registries (disables certificate verification for HTTPS and enables HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16) - --ip=0.0.0.0 Default IP address to use when binding container ports - --ip-forward=true Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement. - --ip-masq=true Enable IP masquerading for bridge's IP range - --iptables=true Enable Docker's addition of iptables rules - --ipv6=false Enable Docker IPv6 support - -l, --log-level="info" Set the logging level (debug, info, warn, error, fatal) - --label=[] Set key=value labels to the daemon (displayed in `docker info`) - --mtu=0 Set the containers network MTU - if no value is provided: default to the default route MTU or 1500 if no default route is available - -p, --pidfile="/var/run/docker.pid" Path to use for daemon PID file - --registry-mirror=[] Specify a preferred Docker registry mirror - -s, --storage-driver="" Force the Docker runtime to use a specific storage driver - --selinux-enabled=false Enable selinux support. SELinux does not presently support the BTRFS storage driver - --storage-opt=[] Set storage driver options - --tls=false Use TLS; implied by --tlsverify flag - --tlscacert="/home/sven/.docker/ca.pem" Trust only remotes providing a certificate signed by the CA given here - --tlscert="/home/sven/.docker/cert.pem" Path to TLS certificate file - --tlskey="/home/sven/.docker/key.pem" Path to TLS key file - --tlsverify=false Use TLS and verify the remote (daemon: verify client, client: verify daemon) - -v, --version=false Print version information and quit - -Options with [] may be specified multiple times. - -The Docker daemon is the persistent process that manages containers. 
-Docker uses the same binary for both the daemon and client. To run the -daemon you provide the `-d` flag. - - -To run the daemon with debug output, use `docker -d -D`. - -### Daemon socket option - -The Docker daemon can listen for [Docker Remote API](/reference/api/docker_remote_api/) -requests via three different types of Socket: `unix`, `tcp`, and `fd`. - -By default, a `unix` domain socket (or IPC socket) is created at `/var/run/docker.sock`, -requiring either `root` permission, or `docker` group membership. - -If you need to access the Docker daemon remotely, you need to enable the `tcp` -Socket. Beware that the default setup provides un-encrypted and un-authenticated -direct access to the Docker daemon - and should be secured either using the -[built in HTTPS encrypted socket](/articles/https/), or by putting a secure web -proxy in front of it. You can listen on port `2375` on all network interfaces -with `-H tcp://0.0.0.0:2375`, or on a particular network interface using its IP -address: `-H tcp://192.168.59.103:2375`. It is conventional to use port `2375` -for un-encrypted, and port `2376` for encrypted communication with the daemon. - -> **Note** If you're using an HTTPS encrypted socket, keep in mind that only TLS1.0 -> and greater are supported. Protocols SSLv3 and under are not supported anymore -> for security reasons. - -On Systemd based systems, you can communicate with the daemon via -[Systemd socket activation](http://0pointer.de/blog/projects/socket-activation.html), use -`docker -d -H fd://`. Using `fd://` will work perfectly for most setups but -you can also specify individual sockets: `docker -d -H fd://3`. If the -specified socket activated files aren't found, then Docker will exit. You -can find examples of using Systemd socket activation with Docker and -Systemd in the [Docker source tree]( -https://github.com/docker/docker/tree/master/contrib/init/systemd/). 
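The client's choice of endpoint follows a simple precedence: an explicit `-H` wins, then the `DOCKER_HOST` environment variable, then the default unix socket. This is an illustrative shell model of that precedence, not Docker's actual code (`pick_host` is a made-up helper):

```shell
#!/bin/sh
# Illustrative sketch of endpoint selection precedence (not Docker source).
unset DOCKER_HOST   # start from a clean environment for the demo

pick_host() {
    # $1: value of -H if given, empty otherwise
    if [ -n "$1" ]; then
        echo "$1"                           # explicit -H wins
    elif [ -n "$DOCKER_HOST" ]; then
        echo "$DOCKER_HOST"                 # else the environment
    else
        echo "unix:///var/run/docker.sock"  # else the default socket
    fi
}

pick_host ""            # prints the default unix socket
DOCKER_HOST="tcp://192.168.59.103:2375"
pick_host ""            # prints the tcp endpoint from the environment
pick_host "fd://"       # an explicit -H overrides the environment
```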
- -You can configure the Docker daemon to listen to multiple sockets at the same -time using multiple `-H` options: - - # listen using the default unix socket, and on 2 specific IP addresses on this host. - docker -d -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2 - -The Docker client will honor the `DOCKER_HOST` environment variable to set -the `-H` flag for the client. - - $ sudo docker -H tcp://0.0.0.0:2375 ps - # or - $ export DOCKER_HOST="tcp://0.0.0.0:2375" - $ sudo docker ps - # both are equal - -Setting the `DOCKER_TLS_VERIFY` environment variable to any value other than the empty -string is equivalent to setting the `--tlsverify` flag. The following are equivalent: - - $ sudo docker --tlsverify ps - # or - $ export DOCKER_TLS_VERIFY=1 - $ sudo docker ps - -The Docker client will honor the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` -environment variables (or the lowercase versions thereof). `HTTPS_PROXY` takes -precedence over `HTTP_PROXY`. If you happen to have a proxy configured with the -`HTTP_PROXY` or `HTTPS_PROXY` environment variables but still want to -communicate with the Docker daemon over its default `unix` domain socket, -setting the `NO_PROXY` environment variable to the path of the socket -(`/var/run/docker.sock`) is required. - -### Daemon storage-driver option - -The Docker daemon has support for several different image layer storage drivers: `aufs`, -`devicemapper`, `btrfs` and `overlay`. - -The `aufs` driver is the oldest, but is based on a Linux kernel patch-set that -is unlikely to be merged into the main kernel. These are also known to cause some -serious kernel crashes. However, `aufs` is also the only storage driver that allows -containers to share executable and shared library memory, so is a useful choice -when running thousands of containers with the same program or libraries. - -The `devicemapper` driver uses thin provisioning and Copy on Write (CoW) -snapshots. 
For each devicemapper graph location – typically
`/var/lib/docker/devicemapper` – a thin pool is created based on two block
devices, one for data and one for metadata. By default, these block devices
are created automatically by using loopback mounts of automatically created
sparse files. Refer to [Storage driver options](#storage-driver-options) below
for how to customize this setup.
The [~jpetazzo/Resizing Docker containers with the Device Mapper plugin](
http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/) article
explains how to tune your existing setup without the use of options.

The `btrfs` driver is very fast for `docker build` - but like `devicemapper` does not
share executable memory between containers. Use `docker -d -s btrfs -g /mnt/btrfs_partition`.

The `overlay` driver is a very fast union filesystem. It was merged into the
mainline Linux kernel as of [3.18.0](https://lkml.org/lkml/2014/10/26/137).
Call `docker -d -s overlay` to use it.
> **Note:**
> It is currently unsupported on `btrfs` or any Copy on Write filesystem
> and should only be used over `ext4` partitions.

#### Storage driver options

A particular storage driver can be configured with options specified with
`--storage-opt` flags. The only driver currently accepting options is
`devicemapper`. All of its options are prefixed with `dm`.

Currently supported options are:

 * `dm.basesize`

    Specifies the size to use when creating the base device, which limits the
    size of images and containers. The default value is 10G. Note, thin devices
    are inherently "sparse", so a 10G device which is mostly empty doesn't use
    10 GB of space on the pool. However, the larger the device is, the more
    space the filesystem will use for metadata, even when empty.

    **Warning**: This value affects the system-wide "base" empty filesystem
    that may already be initialized and inherited by pulled images.
Typically, - a change to this value will require additional steps to take effect: - - $ sudo service docker stop - $ sudo rm -rf /var/lib/docker - $ sudo service docker start - - Example use: - - $ sudo docker -d --storage-opt dm.basesize=20G - - * `dm.loopdatasize` - - Specifies the size to use when creating the loopback file for the "data" - device which is used for the thin pool. The default size is 100G. Note that - the file is sparse, so it will not initially take up this much space. - - Example use: - - $ sudo docker -d --storage-opt dm.loopdatasize=200G - - * `dm.loopmetadatasize` - - Specifies the size to use when creating the loopback file for the - "metadata" device which is used for the thin pool. The default size is 2G. - Note that the file is sparse, so it will not initially take up this much - space. - - Example use: - - $ sudo docker -d --storage-opt dm.loopmetadatasize=4G - - * `dm.fs` - - Specifies the filesystem type to use for the base device. The supported - options are "ext4" and "xfs". The default is "ext4" - - Example use: - - $ sudo docker -d --storage-opt dm.fs=xfs - - * `dm.mkfsarg` - - Specifies extra mkfs arguments to be used when creating the base device. - - Example use: - - $ sudo docker -d --storage-opt "dm.mkfsarg=-O ^has_journal" - - * `dm.mountopt` - - Specifies extra mount options used when mounting the thin devices. - - Example use: - - $ sudo docker -d --storage-opt dm.mountopt=nodiscard - - * `dm.datadev` - - Specifies a custom blockdevice to use for data for the thin pool. - - If using a block device for device mapper storage, ideally both datadev and - metadatadev should be specified to completely avoid using the loopback - device. - - Example use: - - $ sudo docker -d \ - --storage-opt dm.datadev=/dev/sdb1 \ - --storage-opt dm.metadatadev=/dev/sdc1 - - * `dm.metadatadev` - - Specifies a custom blockdevice to use for metadata for the thin pool. 
- - For best performance the metadata should be on a different spindle than the - data, or even better on an SSD. - - If setting up a new metadata pool it is required to be valid. This can be - achieved by zeroing the first 4k to indicate empty metadata, like this: - - $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 - - Example use: - - $ sudo docker -d \ - --storage-opt dm.datadev=/dev/sdb1 \ - --storage-opt dm.metadatadev=/dev/sdc1 - - * `dm.blocksize` - - Specifies a custom blocksize to use for the thin pool. The default - blocksize is 64K. - - Example use: - - $ sudo docker -d --storage-opt dm.blocksize=512K - - * `dm.blkdiscard` - - Enables or disables the use of blkdiscard when removing devicemapper - devices. This is enabled by default (only) if using loopback devices and is - required to resparsify the loopback file on image/container removal. - - Disabling this on loopback can lead to *much* faster container removal - times, but will make the space used in `/var/lib/docker` directory not be - returned to the system for other use when containers are removed. - - Example use: - - $ sudo docker -d --storage-opt dm.blkdiscard=false - -### Docker exec-driver option - -The Docker daemon uses a specifically built `libcontainer` execution driver as its -interface to the Linux kernel `namespaces`, `cgroups`, and `SELinux`. - -There is still legacy support for the original [LXC userspace tools]( -https://linuxcontainers.org/) via the `lxc` execution driver, however, this is -not where the primary development of new functionality is taking place. -Add `-e lxc` to the daemon flags to use the `lxc` execution driver. - - -### Daemon DNS options - -To set the DNS server for all Docker containers, use -`docker -d --dns 8.8.8.8`. - -To set the DNS search domain for all Docker containers, use -`docker -d --dns-search example.com`. - -### Insecure registries - -Docker considers a private registry either secure or insecure. 
In the rest of this section, *registry* is used for *private registry*, and `myregistry:5000`
is a placeholder example for a private registry.

A secure registry uses TLS and a copy of its CA certificate is placed on the Docker host at
`/etc/docker/certs.d/myregistry:5000/ca.crt`.
An insecure registry is either not using TLS (i.e., listening on plain text HTTP), or is using
TLS with a CA certificate not known by the Docker daemon. The latter can happen when the
certificate was not found under `/etc/docker/certs.d/myregistry:5000/`, or if the certificate
verification failed (i.e., wrong CA).

By default, Docker assumes all registries, except local ones (see local registries below), are secure.
Communicating with an insecure registry is not possible if Docker assumes that registry is secure.
In order to communicate with an insecure registry, the Docker daemon requires `--insecure-registry`
in one of the following two forms:

* `--insecure-registry myregistry:5000` tells the Docker daemon that myregistry:5000 should be considered insecure.
* `--insecure-registry 10.1.0.0/16` tells the Docker daemon that any registry whose domain resolves to an IP address in
the subnet described by the CIDR syntax should be considered insecure.

The flag can be used multiple times to allow multiple registries to be marked as insecure.

If Docker assumes a registry is secure but it is in fact insecure, `docker pull`, `docker push`, and `docker search`
will result in an error message prompting the user to either secure the registry or pass the `--insecure-registry`
flag to the Docker daemon as described above.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure
as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.

### Running a Docker daemon behind an HTTPS_PROXY

When running inside a LAN that uses an `HTTPS` proxy, the Docker Hub certificates
will be replaced by the proxy's certificates.
These certificates need to be added -to your Docker host's configuration: - -1. Install the `ca-certificates` package for your distribution -2. Ask your network admin for the proxy's CA certificate and append them to - `/etc/pki/tls/certs/ca-bundle.crt` -3. Then start your Docker daemon with `HTTPS_PROXY=http://username:password@proxy:port/ docker -d`. - The `username:` and `password@` are optional - and are only needed if your proxy - is set up to require authentication. - -This will only add the proxy and authentication to the Docker daemon's requests - -your `docker build`s and running containers will need extra configuration to use -the proxy - -### Miscellaneous options - -IP masquerading uses address translation to allow containers without a public IP to talk -to other machines on the Internet. This may interfere with some network topologies and -can be disabled with --ip-masq=false. - -Docker supports softlinks for the Docker data directory -(`/var/lib/docker`) and for `/var/lib/docker/tmp`. The `DOCKER_TMPDIR` and the data directory can be set like this: - - DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 - # or - export DOCKER_TMPDIR=/mnt/disk2/tmp - /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 - - -## attach - - Usage: docker attach [OPTIONS] CONTAINER - - Attach to a running container - - --no-stdin=false Do not attach STDIN - --sig-proxy=true Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied. - -The `docker attach` command allows you to attach to a running container using -the container's ID or name, either to view its ongoing output or to control it -interactively. You can attach to the same contained process multiple times -simultaneously, screen sharing style, or quickly view the progress of your -daemonized process. 

You can detach from the container (and leave it running) with `CTRL-p CTRL-q`
(for a quiet exit) or `CTRL-c` which will send a `SIGKILL` to the container.
When you are attached to a container, and exit its main process, the process's
exit code will be returned to the client.

It is forbidden to redirect the standard input of a `docker attach` command while
attaching to a tty-enabled container (i.e., launched with `-t`).

#### Examples

    $ sudo docker run -d --name topdemo ubuntu /usr/bin/top -b
    $ sudo docker attach topdemo
    top - 02:05:52 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
    Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:    373572k total,   355560k used,    18012k free,    27872k buffers
    Swap:   786428k total,        0k used,   786428k free,   221740k cached

    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      1 root      20   0 17200 1116  912 R    0  0.3   0:00.03 top

    top - 02:05:55 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
    Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:    373572k total,   355244k used,    18328k free,    27872k buffers
    Swap:   786428k total,        0k used,   786428k free,   221776k cached

    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top

    top - 02:05:58 up  3:06,  0 users,  load average: 0.01, 0.02, 0.05
    Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.2%us,  0.3%sy,  0.0%ni, 99.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:    373572k total,   355780k used,    17792k free,    27880k buffers
    Swap:   786428k total,        0k used,   786428k free,   221776k cached

    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top
    ^C$
    $ echo $?
    0
    $ docker ps -a | grep topdemo
    7998ac8581f9   ubuntu:14.04   "/usr/bin/top -b"   38 seconds ago   Exited (0) 21 seconds ago   topdemo

And in this second example, you can see the exit code returned by the `bash` process
is returned by the `docker attach` command to its caller too:

    $ sudo docker run --name test -d -it debian
    275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab
    $ sudo docker attach test
    $$ exit 13
    exit
    $ echo $?
    13
    $ sudo docker ps -a | grep test
    275c44472aeb   debian:7   "/bin/bash"   26 seconds ago   Exited (13) 17 seconds ago   test

## build

    Usage: docker build [OPTIONS] PATH | URL | -

    Build a new image from the source code at PATH

      --force-rm=false     Always remove intermediate containers, even after unsuccessful builds
      --no-cache=false     Do not use cache when building the image
      --pull=false         Always attempt to pull a newer version of the image
      -q, --quiet=false    Suppress the verbose output generated by the containers
      --rm=true            Remove intermediate containers after a successful build
      -t, --tag=""         Repository name (and optionally a tag) to be applied to the resulting image in case of success

Use this command to build Docker images from a Dockerfile and a
"context".

The files at `PATH` or `URL` are called the "context" of the build. The
build process may refer to any of the files in the context, for example
when using an [*ADD*](/reference/builder/#add) instruction.
When a single Dockerfile is given as `URL` or is piped through `STDIN`
(`docker build - < Dockerfile`), then no context is set.

When a Git repository is set as `URL`, then the repository is used as
the context. The Git repository is cloned with its submodules
(`git clone --recursive`). A fresh `git clone` occurs in a temporary directory
on your local host, and then this is sent to the Docker daemon as the
context. This way, your local user credentials, VPNs, and so on can be
used to access private repositories.
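Conceptually, the context is just a tarball of everything under `PATH`. This sketch (hypothetical paths, not the real client code) shows what ends up in that archive:

```shell
#!/bin/sh
# Sketch only: the real client streams a tar of the build context to the
# daemon over the API; this just shows what goes into that archive.
mkdir -p /tmp/ctx-demo/app
printf 'FROM busybox\n' > /tmp/ctx-demo/Dockerfile
printf 'hello\n'        > /tmp/ctx-demo/app/greeting

# Package the whole context directory, as `docker build /tmp/ctx-demo` would.
tar -czf /tmp/ctx-demo.tar.gz -C /tmp/ctx-demo .

# Every file under PATH is in the archive - not only the ones ADDed.
tar -tzf /tmp/ctx-demo.tar.gz
```

This is why excluding large or sensitive files with `.dockerignore` matters: everything not excluded is shipped to the daemon.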

If a file named `.dockerignore` exists in the root of `PATH` then it
is interpreted as a newline-separated list of exclusion patterns.
Exclusion patterns match files or directories relative to `PATH` that
will be excluded from the context. Globbing is done using Go's
[filepath.Match](http://golang.org/pkg/path/filepath#Match) rules.

Please note that `.dockerignore` files in other subdirectories are
considered as normal files. Filepaths in `.dockerignore` are absolute with
the current directory as the root. Wildcards are allowed but the search
is not recursive.

#### Example .dockerignore file

    */temp*
    */*/temp*
    temp?

The first line above, `*/temp*`, would ignore all files with names starting with
`temp` in any immediate subdirectory of the root. For example, a file named
`/somedir/temporary.txt` would be ignored. The second line, `*/*/temp*`, will
ignore files starting with `temp` in any subdirectory that is two levels
below the root directory. For example, the file `/somedir/subdir/temporary.txt`
would be ignored in this case. The last line in the above example, `temp?`,
will ignore files in the root directory whose names match the pattern. For
example, the files `tempa` and `tempb` are ignored from the root directory.
Currently there is no support for regular expressions. Formats
like `[^temp*]` are ignored.

By default the `docker build` command will look for a `Dockerfile` at the
root of the build context. The `-f`, `--file`, option lets you specify
the path to an alternative file to use instead. This is useful
in cases where the same set of files are used for multiple builds. The path
must be to a file within the build context. If a relative path is specified
then it must be relative to the current directory.

See also:

[*Dockerfile Reference*](/reference/builder).

#### Examples

    $ sudo docker build .
- Uploading context 10240 bytes
- Step 1 : FROM busybox
- Pulling repository busybox
- ---> e9aa60c60128
- Step 2 : RUN ls -lh /
- ---> Running in 9c9e81692ae9
- total 24
- drwxr-xr-x 2 root root 4.0K Mar 12 2013 bin
- drwxr-xr-x 5 root root 4.0K Oct 19 00:19 dev
- drwxr-xr-x 2 root root 4.0K Oct 19 00:19 etc
- drwxr-xr-x 2 root root 4.0K Nov 15 23:34 lib
- lrwxrwxrwx 1 root root 3 Mar 12 2013 lib64 -> lib
- dr-xr-xr-x 116 root root 0 Nov 15 23:34 proc
- lrwxrwxrwx 1 root root 3 Mar 12 2013 sbin -> bin
- dr-xr-xr-x 13 root root 0 Nov 15 23:34 sys
- drwxr-xr-x 2 root root 4.0K Mar 12 2013 tmp
- drwxr-xr-x 2 root root 4.0K Nov 15 23:34 usr
- ---> b35f4035db3f
- Step 3 : CMD echo Hello world
- ---> Running in 02071fceb21b
- ---> f52f38b7823e
- Successfully built f52f38b7823e
- Removing intermediate container 9c9e81692ae9
- Removing intermediate container 02071fceb21b
-
-This example specifies that the `PATH` is
-`.`, and so all the files in the local directory get
-tarred and sent to the Docker daemon. The `PATH`
-specifies where to find the files for the "context" of the build on the
-Docker daemon. Remember that the daemon could be running on a remote
-machine and that no parsing of the Dockerfile
-happens at the client side (where you're running
-`docker build`). That means that *all* the files at
-`PATH` get sent, not just the ones listed to
-[*ADD*](/reference/builder/#add) in the Dockerfile.
-
-The transfer of context from the local machine to the Docker daemon is
-what the `docker` client means when you see the
-"Sending build context" message.
-
-If you wish to keep the intermediate containers after the build is
-complete, you must use `--rm=false`. This does not
-affect the build cache.
-
- $ sudo docker build .
- Uploading context 18.829 MB - Uploading context - Step 0 : FROM busybox - ---> 769b9341d937 - Step 1 : CMD echo Hello world - ---> Using cache - ---> 99cc1ad10469 - Successfully built 99cc1ad10469 - $ echo ".git" > .dockerignore - $ sudo docker build . - Uploading context 6.76 MB - Uploading context - Step 0 : FROM busybox - ---> 769b9341d937 - Step 1 : CMD echo Hello world - ---> Using cache - ---> 99cc1ad10469 - Successfully built 99cc1ad10469 - -This example shows the use of the `.dockerignore` file to exclude the `.git` -directory from the context. Its effect can be seen in the changed size of the -uploaded context. - - $ sudo docker build -t vieux/apache:2.0 . - -This will build like the previous example, but it will then tag the -resulting image. The repository name will be `vieux/apache` -and the tag will be `2.0` - - $ sudo docker build - < Dockerfile - -This will read a Dockerfile from `STDIN` without context. Due to the -lack of a context, no contents of any local directory will be sent to -the Docker daemon. Since there is no context, a Dockerfile `ADD` only -works if it refers to a remote URL. - - $ sudo docker build - < context.tar.gz - -This will build an image for a compressed context read from `STDIN`. -Supported formats are: bzip2, gzip and xz. - - $ sudo docker build github.com/creack/docker-firefox - -This will clone the GitHub repository and use the cloned repository as -context. The Dockerfile at the root of the -repository is used as Dockerfile. Note that you -can specify an arbitrary Git repository by using the `git://` or `git@` -schema. - - $ sudo docker build -f Dockerfile.debug . - -This will use a file called `Dockerfile.debug` for the build -instructions instead of `Dockerfile`. - - $ sudo docker build -f dockerfiles/Dockerfile.debug -t myapp_debug . - $ sudo docker build -f dockerfiles/Dockerfile.prod -t myapp_prod . 
-
-The above commands will build the current build context (as specified by
-the `.`) twice, once using a debug version of a `Dockerfile` and once using
-a production version.
-
- $ cd /home/me/myapp/some/dir/really/deep
- $ sudo docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp
- $ sudo docker build -f ../../../../dockerfiles/debug /home/me/myapp
-
-These two `docker build` commands do exactly the same thing. They both
-use the contents of the `debug` file instead of looking for a `Dockerfile`
-and will use `/home/me/myapp` as the root of the build context. Note that
-`debug` is in the directory structure of the build context, regardless of how
-you refer to it on the command line.
-
-> **Note:** `docker build` will return a `no such file or directory` error
-> if the file or directory does not exist in the uploaded context. This may
-> happen if there is no context, or if you specify a file that is elsewhere
-> on the Host system. The context is limited to the current directory (and its
-> children) for security reasons, and to ensure repeatable builds on remote
-> Docker hosts. This is also the reason why `ADD ../file` will not work.
-
-## commit
-
- Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
-
- Create a new image from a container's changes
-
- -a, --author="" Author (e.g., "John Hannibal Smith ")
- -m, --message="" Commit message
- -p, --pause=true Pause container during commit
-
-It can be useful to commit a container's file changes or settings into a
-new image. This allows you to debug a container by running an interactive
-shell, or to export a working dataset to another server. Generally, it
-is better to use Dockerfiles to manage your images in a documented and
-maintainable way.
-
-By default, the container being committed and its processes will be paused
-while the image is committed. This reduces the likelihood of
-encountering data corruption during the process of creating the commit.
-If this behavior is undesired, disable it with `--pause=false`.
-
-#### Commit an existing container
-
- $ sudo docker ps
- ID IMAGE COMMAND CREATED STATUS PORTS
- c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
- 197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
- $ sudo docker commit c3f279d17e0a SvenDowideit/testimage:version3
- f5283438590d
- $ sudo docker images | head
- REPOSITORY TAG ID CREATED VIRTUAL SIZE
- SvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB
-
-## cp
-
-Copy files/folders from a container's filesystem to the host
-path. Paths are relative to the root of the filesystem.
-
- Usage: docker cp CONTAINER:PATH HOSTPATH
-
- Copy files/folders from the PATH to the HOSTPATH
-
-## create
-
-Creates a new container.
-
- Usage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
-
- Create a new container
-
- -a, --attach=[] Attach to STDIN, STDOUT or STDERR.
- --add-host=[] Add a custom host-to-IP mapping (host:ip)
- -c, --cpu-shares=0 CPU shares (relative weight)
- --cap-add=[] Add Linux capabilities
- --cap-drop=[] Drop Linux capabilities
- --cidfile="" Write the container ID to the file
- --cpuset="" CPUs in which to allow execution (0-3, 0,1)
- --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
- --dns=[] Set custom DNS servers
- --dns-search=[] Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)
- -e, --env=[] Set environment variables
- --entrypoint="" Overwrite the default ENTRYPOINT of the image
- --env-file=[] Read in a line delimited file of environment variables
- --expose=[] Expose a port or a range of ports (e.g. 
--expose=3300-3310) from the container without publishing it to your host
- -h, --hostname="" Container host name
- -i, --interactive=false Keep STDIN open even if not attached
- --ipc="" Default is to create a private IPC namespace (POSIX SysV IPC) for the container
- 'container:': reuses another container's shared memory, semaphores and message queues
- 'host': use the host's shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.
- --link=[] Add link to another container in the form of :alias
- --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
- -m, --memory="" Memory limit (format: , where unit = b, k, m or g)
- --mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33)
- --name="" Assign a name to the container
- --net="bridge" Set the Network mode for the container
- 'bridge': creates a new network stack for the container on the docker bridge
- 'none': no networking for this container
- 'container:': reuses another container's network stack
- 'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
- -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces
- -p, --publish=[] Publish a container's port, or a range of ports (e.g., `-p 3300-3310`), to the host
- format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
- Both hostPort and containerPort can be specified as a range of ports.
- When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. 
(e.g., `-p 1234-1236:1234-1236/tcp`) - (use 'docker port' to see the actual mapping) - --privileged=false Give extended privileges to this container - --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) - --security-opt=[] Security Options - -t, --tty=false Allocate a pseudo-TTY - -u, --user="" Username or UID - -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container) - --volumes-from=[] Mount volumes from the specified container(s) - -w, --workdir="" Working directory inside the container - -The `docker create` command creates a writeable container layer over -the specified image and prepares it for running the specified command. -The container ID is then printed to `STDOUT`. -This is similar to `docker run -d` except the container is never started. -You can then use the `docker start ` command to start the -container at any point. - -This is useful when you want to set up a container configuration ahead -of time so that it is ready to start when you need it. - -Please see the [run command](#run) section for more details. - -#### Examples - - $ sudo docker create -t -i fedora bash - 6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752 - $ sudo docker start -a -i 6d8af538ec5 - bash-4.2# - -As of v1.4.0 container volumes are initialized during the `docker create` -phase (i.e., `docker run` too). For example, this allows you to `create` the -`data` volume container, and then use it from another container: - - $ docker create -v /data --name data ubuntu - 240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57 - $ docker run --rm --volumes-from data ubuntu ls -la /data - total 8 - drwxr-xr-x 2 root root 4096 Dec 5 04:10 . - drwxr-xr-x 48 root root 4096 Dec 5 04:11 .. 
-
-Similarly, `create` a volume container with a bind-mounted host directory, which
-can then be used from a subsequent container:
-
- $ docker create -v /home/docker:/docker --name docker ubuntu
- 9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03
- $ docker run --rm --volumes-from docker ubuntu ls -la /docker
- total 20
- drwxr-sr-x 5 1000 staff 180 Dec 5 04:00 .
- drwxr-xr-x 48 root root 4096 Dec 5 04:13 ..
- -rw-rw-r-- 1 1000 staff 3833 Dec 5 04:01 .ash_history
- -rw-r--r-- 1 1000 staff 446 Nov 28 11:51 .ashrc
- -rw-r--r-- 1 1000 staff 25 Dec 5 04:00 .gitconfig
- drwxr-sr-x 3 1000 staff 60 Dec 1 03:28 .local
- -rw-r--r-- 1 1000 staff 920 Nov 28 11:51 .profile
- drwx--S--- 2 1000 staff 460 Dec 5 00:51 .ssh
- drwxr-xr-x 32 1000 staff 1140 Dec 5 04:01 docker
-
-
-## diff
-
-List the changed files and directories in a container's filesystem
-
- Usage: docker diff CONTAINER
-
- Inspect changes on a container's filesystem
-
-There are three types of change listed by `diff`:
-
-1. `A` - Add
-2. `D` - Delete
-3. `C` - Change
-
-For example:
-
- $ sudo docker diff 7bb0e258aefe
-
- C /dev
- A /dev/kmsg
- C /etc
- A /etc/mtab
- A /go
- A /go/src
- A /go/src/github.com
- A /go/src/github.com/docker
- A /go/src/github.com/docker/docker
- A /go/src/github.com/docker/docker/.git
- ....
-
-## events
-
- Usage: docker events [OPTIONS]
-
- Get real time events from the server
-
- -f, --filter=[] Provide filter values (i.e., 'event=stop')
- --since="" Show all events created since timestamp
- --until="" Stream events until this timestamp
-
-Docker containers will report the following events:
-
- create, destroy, die, export, kill, oom, pause, restart, start, stop, unpause
-
-and Docker images will report:
-
- untag, delete
-
-#### Filtering
-
-The filtering flag (`-f` or `--filter`) format is a `key=value` pair.
If you would like to use
-multiple filters, pass multiple flags (e.g., `--filter "foo=bar" --filter "bif=baz"`).
-
-Using the same filter multiple times will be handled as an *OR*; for example
-`--filter container=588a23dac085 --filter container=a8f7720b8c22` will display events for
-container 588a23dac085 *OR* container a8f7720b8c22.
-
-Using multiple filters will be handled as an *AND*; for example
-`--filter container=588a23dac085 --filter event=start` will display events for
-container 588a23dac085 *AND* the event type is *start*.
-
-Current filters:
- * event
- * image
- * container
-
-#### Examples
-
-You'll need two shells for this example.
-
-**Shell 1: Listening for events:**
-
- $ sudo docker events
-
-**Shell 2: Start and Stop containers:**
-
- $ sudo docker start 4386fb97867d
- $ sudo docker stop 4386fb97867d
- $ sudo docker stop 7805c1d35632
-
-**Shell 1: (Again .. now showing events):**
-
- 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start
- 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
- 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
- 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die
- 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop
-
-**Show events in the past from a specified time:**
-
- $ sudo docker events --since 1378216169
- 2014-03-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
- 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
- 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die
- 2014-03-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop
-
- $ sudo docker events --since '2013-09-03'
- 2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start
- 2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
- 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
- 
2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --since '2013-09-03T15:49:29' - 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - -**Filter events:** - - $ sudo docker events --filter 'event=stop' - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'image=ubuntu-1:14.04' - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - - $ sudo docker events --filter 'container=7805c1d35632' - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d' - 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'container=7805c1d35632' --filter 'event=stop' - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - -## exec - - Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...] 
- - Run a command in a running container - - -d, --detach=false Detached mode: run command in the background - -i, --interactive=false Keep STDIN open even if not attached - -t, --tty=false Allocate a pseudo-TTY - -The `docker exec` command runs a new command in a running container. - -The command started using `docker exec` will only run while the container's primary -process (`PID 1`) is running, and will not be restarted if the container is restarted. - -If the container is paused, then the `docker exec` command will fail with an error: - - $ docker pause test - test - $ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 1ae3b36715d2 ubuntu:latest "bash" 17 seconds ago Up 16 seconds (Paused) test - $ docker exec test ls - FATA[0000] Error response from daemon: Container test is paused, unpause the container before exec - $ echo $? - 1 - -#### Examples - - $ sudo docker run --name ubuntu_bash --rm -i -t ubuntu bash - -This will create a container named `ubuntu_bash` and start a Bash session. - - $ sudo docker exec -d ubuntu_bash touch /tmp/execWorks - -This will create a new file `/tmp/execWorks` inside the running container -`ubuntu_bash`, in the background. - - $ sudo docker exec -it ubuntu_bash bash - -This will create a new Bash session in the container `ubuntu_bash`. - -## export - - Usage: docker export CONTAINER - - Export the contents of a filesystem as a tar archive to STDOUT - -For example: - - $ sudo docker export red_panda > latest.tar - -> **Note:** -> `docker export` does not export the contents of volumes associated with the -> container. If a volume is mounted on top of an existing directory in the -> container, `docker export` will export the contents of the *underlying* -> directory, not the contents of the volume. -> -> Refer to [Backup, restore, or migrate data volumes](/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes) -> in the user guide for examples on exporting data in a volume. 
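Because `docker export` emits a plain tar stream, the result can be examined with ordinary tools. An illustrative sketch (Python; the tiny archive built here merely stands in for a real `latest.tar` produced by `docker export`):

```python
import io
import tarfile

def list_exported_files(tar_bytes):
    """Return the sorted paths inside an exported container filesystem."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        return sorted(tar.getnames())

# Simulate a tiny exported rootfs (a real one comes from `docker export`).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for path in ("etc/hostname", "bin/sh"):
        data = b"stub"
        info = tarfile.TarInfo(path)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

print(list_exported_files(buf.getvalue()))
```

Listing the archive this way is a quick check of what actually got exported — and, per the note above, you would not find the contents of any volumes in it.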
- -## history - - Usage: docker history [OPTIONS] IMAGE - - Show the history of an image - - --no-trunc=false Don't truncate output - -q, --quiet=false Only show numeric IDs - -To see how the `docker:latest` image was built: - - $ sudo docker history docker - IMAGE CREATED CREATED BY SIZE - 3e23a5875458790b7a806f95f7ec0d0b2a5c1659bfc899c89f939f6d5b8f7094 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B - 8578938dd17054dce7993d21de79e96a037400e8d28e15e7290fea4f65128a36 8 days ago /bin/sh -c dpkg-reconfigure locales && locale-gen C.UTF-8 && /usr/sbin/update-locale LANG=C.UTF-8 1.245 MB - be51b77efb42f67a5e96437b3e102f81e0a1399038f77bf28cea0ed23a65cf60 8 days ago /bin/sh -c apt-get update && apt-get install -y git libxml2-dev python build-essential make gcc python-dev locales python-pip 338.3 MB - 4b137612be55ca69776c7f30c2d2dd0aa2e7d72059820abf3e25b629f887a084 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB - 750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian 0 B - 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 9 months ago 0 B - -## images - - Usage: docker images [OPTIONS] [REPOSITORY] - - List images - - -a, --all=false Show all images (by default filter out the intermediate image layers) - -f, --filter=[] Provide filter values (i.e., 'dangling=true') - --no-trunc=false Don't truncate output - -q, --quiet=false Only show numeric IDs - -The default `docker images` will show all top level -images, their repository and tags, and their virtual size. - -Docker images have intermediate layers that increase reusability, -decrease disk usage, and speed up `docker build` by -allowing each step to be cached. These intermediate layers are not shown -by default. - -The `VIRTUAL SIZE` is the cumulative space taken up by the image and all -its parent images. 
This is also the disk space used by the contents of the
-tar file created when you `docker save` an image.
-
-An image will be listed more than once if it has multiple repository names
-or tags. This single image (identifiable by its matching `IMAGE ID`)
-uses up the listed `VIRTUAL SIZE` only once.
-
-#### Listing the most recently created images
-
- $ sudo docker images | head
- REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
- 77af4d6b9913 19 hours ago 1.089 GB
- committ latest b6fa739cedf5 19 hours ago 1.089 GB
- 78a85c484f71 19 hours ago 1.089 GB
- docker latest 30557a29d5ab 20 hours ago 1.089 GB
- 5ed6274db6ce 24 hours ago 1.089 GB
- postgres 9 746b819f315e 4 days ago 213.4 MB
- postgres 9.3 746b819f315e 4 days ago 213.4 MB
- postgres 9.3.5 746b819f315e 4 days ago 213.4 MB
- postgres latest 746b819f315e 4 days ago 213.4 MB
-
-
-#### Listing the full length image IDs
-
- $ sudo docker images --no-trunc | head
- REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
- 77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182 19 hours ago 1.089 GB
- committest latest b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f 19 hours ago 1.089 GB
- 78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921 19 hours ago 1.089 GB
- docker latest 30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4 20 hours ago 1.089 GB
- 0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5 20 hours ago 1.089 GB
- 18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b 22 hours ago 1.082 GB
- f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a 23 hours ago 1.089 GB
- tryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 131.5 MB
- 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB
-
-#### Filtering
-
-The filtering flag (`-f` or `--filter`) format is a `key=value` pair.
If there is more
-than one filter, pass multiple flags (e.g., `--filter "foo=bar" --filter "bif=baz"`).
-
-Current filters:
- * dangling (boolean - true or false)
-
-##### Untagged images
-
- $ sudo docker images --filter "dangling=true"
-
- REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
- 8abc22fbb042 4 weeks ago 0 B
- 48e5f45168b9 4 weeks ago 2.489 MB
- bf747efa0e2f 4 weeks ago 0 B
- 980fe10e5736 12 weeks ago 101.4 MB
- dea752e4e117 12 weeks ago 101.4 MB
- 511136ea3c5a 8 months ago 0 B
-
-This will display untagged images that are leaves of the image tree (not
-intermediate layers). These images occur when a new build of an image takes the
-`repo:tag` away from the image ID, leaving it untagged. A warning will be issued
-if you try to remove an image while a container is currently using it.
-This flag makes batch cleanup possible.
-
-The IDs are then ready to pass to `docker rmi ...`, for example:
-
- $ sudo docker rmi $(sudo docker images -f "dangling=true" -q)
-
- 8abc22fbb042
- 48e5f45168b9
- bf747efa0e2f
- 980fe10e5736
- dea752e4e117
- 511136ea3c5a
-
-NOTE: Docker will warn you if any containers exist that are using these untagged images.
-
-## import
-
- Usage: docker import URL|- [REPOSITORY[:TAG]]
-
- Create an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.
-
-URLs must start with `http` and point to a single file archive (.tar,
-.tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a root filesystem. If
-you would like to import from a local directory or archive, you can use
-the `-` parameter to take the data from `STDIN`.
-
-#### Examples
-
-**Import from a remote location:**
-
-This will create a new untagged image.
-
- $ sudo docker import http://example.com/exampleimage.tgz
-
-**Import from a local file:**
-
-Import to Docker via a pipe and `STDIN`.
-
- $ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new
-
-**Import from a local directory:**
-
- $ sudo tar -c . | sudo docker import - exampleimagedir
-
-Note the `sudo` in this example: you must preserve
-the ownership of the files (especially root ownership) while
-archiving them with tar. If you do not run tar as root (or via
-`sudo`), the ownerships might not be preserved.
-
-## info
-
-
- Usage: docker info
-
- Display system-wide information
-
-For example:
-
- $ sudo docker -D info
- Containers: 14
- Images: 52
- Storage Driver: aufs
- Root Dir: /var/lib/docker/aufs
- Backing Filesystem: extfs
- Dirs: 545
- Execution Driver: native-0.2
- Kernel Version: 3.13.0-24-generic
- Operating System: Ubuntu 14.04 LTS
- CPUs: 1
- Name: prod-server-42
- ID: 7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS
- Total Memory: 2 GiB
- Debug mode (server): false
- Debug mode (client): true
- Fds: 10
- Goroutines: 9
- EventsListeners: 0
- Init Path: /usr/bin/docker
- Docker Root Dir: /var/lib/docker
- Username: svendowideit
- Registry: [https://index.docker.io/v1/]
- Labels:
- storage=ssd
-
-The global `-D` option tells all `docker` commands to output debug information.
-
-When sending issue reports, please use `docker version` and `docker -D info` to
-ensure we know how your setup is configured.
-
-## inspect
-
- Usage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]
-
- Return low-level information on a container or image
-
- -f, --format="" Format the output using the given Go template.
-
-By default, this will render all results in a JSON array. If a format is
-specified, the given template will be executed for each result.
-
-Go's [text/template](http://golang.org/pkg/text/template/) package
-describes all the details of the format.
-
-#### Examples
-
-**Get an instance's IP address:**
-
-For the most part, you can pick out any field from the JSON in a fairly
-straightforward manner.
-
- $ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID
-
-**Get an instance's MAC Address:**
-
-The same approach works for the MAC address:
-
- $ sudo docker inspect --format='{{.NetworkSettings.MacAddress}}' $INSTANCE_ID
-
-**List All Port Bindings:**
-
-One can loop over arrays and maps in the results to produce simple text
-output:
-
- $ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID
-
-**Find a Specific Port Mapping:**
-
-The `.Field` syntax doesn't work when the field name begins with a
-number, but the template language's `index` function does. The
-`.NetworkSettings.Ports` section contains a map of the internal port
-mappings to a list of external address/port objects, so to grab just the
-numeric public port, you use `index` to find the specific port map, and
-then `index` 0 contains the first object inside of that. Then we ask for
-the `HostPort` field to get the public address.
-
- $ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID
-
-**Get config:**
-
-The `.Field` syntax doesn't work when the field contains JSON data, but
-the template language's custom `json` function does. The `.config`
-section contains a complex JSON object, so to grab it as JSON, you use
-`json` to convert the configuration object into JSON.
-
- $ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID
-
-## kill
-
- Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
-
- Kill a running container using SIGKILL or a specified signal
-
- -s, --signal="KILL" Signal to send to the container
-
-The main process inside the container will be sent `SIGKILL`, or any
-signal specified with option `--signal`.
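What a signal does to a process can be seen without Docker at all. Here is a small POSIX sketch (Python, illustrative only): the process's exit status records the signal that killed it, which is the same information `docker ps` later reports as the container's exit cause.

```python
import signal
import subprocess
import time

# Start a long-running process, standing in for a container's main process.
proc = subprocess.Popen(["sleep", "60"])
time.sleep(0.2)

# Equivalent in spirit to `docker kill -s TERM <container>`;
# the default for `docker kill` would be SIGKILL instead.
proc.send_signal(signal.SIGTERM)
proc.wait()

# On POSIX, a negative return code means "killed by that signal".
print(proc.returncode)   # -15, i.e. terminated by SIGTERM
```

Sending `SIGTERM` first gives the process a chance to shut down cleanly; `SIGKILL` cannot be caught and stops it immediately.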
- -## load - - Usage: docker load [OPTIONS] - - Load an image from a tar archive on STDIN - - -i, --input="" Read from a tar archive file, instead of STDIN - -Loads a tarred repository from a file or the standard input stream. -Restores both images and tags. - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - $ sudo docker load < busybox.tar - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - busybox latest 769b9341d937 7 weeks ago 2.489 MB - $ sudo docker load --input fedora.tar - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - busybox latest 769b9341d937 7 weeks ago 2.489 MB - fedora rawhide 0d20aec6529d 7 weeks ago 387 MB - fedora 20 58394af37342 7 weeks ago 385.5 MB - fedora heisenbug 58394af37342 7 weeks ago 385.5 MB - fedora latest 58394af37342 7 weeks ago 385.5 MB - -## login - - Usage: docker login [OPTIONS] [SERVER] - - Register or log in to a Docker registry server, if no server is specified "https://index.docker.io/v1/" is the default. - - -e, --email="" Email - -p, --password="" Password - -u, --username="" Username - -If you want to login to a self-hosted registry you can specify this by -adding the server name. - - example: - $ sudo docker login localhost:8080 - -## logout - - Usage: docker logout [SERVER] - - Log out from a Docker registry, if no server is specified "https://index.docker.io/v1/" is the default. - -For example: - - $ sudo docker logout localhost:8080 - -## logs - - Usage: docker logs [OPTIONS] CONTAINER - - Fetch the logs of a container - - -f, --follow=false Follow log output - -t, --timestamps=false Show timestamps - --tail="all" Output the specified number of lines at the end of logs (defaults to all logs) - -The `docker logs` command batch-retrieves logs present at the time of execution. - -The `docker logs --follow` command will continue streaming the new output from -the container's `STDOUT` and `STDERR`. 
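The `--tail` selection described in the flag list can be sketched as a tiny function (illustrative only, not Docker's code; it mirrors the documented fallback where values that are not non-negative integers mean "all"):

```python
def tail_logs(lines, tail="all"):
    """Mimic `docker logs --tail`: return the last N log lines.

    Negative numbers and non-integer values fall back to returning
    everything, matching the documented behavior of `--tail`.
    """
    try:
        n = int(tail)
    except (TypeError, ValueError):
        return lines          # "all", or any non-integer value
    if n < 0:
        return lines          # negative values are invalid -> all
    return lines[-n:] if n else []

log = ["line1", "line2", "line3"]
print(tail_logs(log, 2))       # last two lines
print(tail_logs(log, "oops"))  # invalid value -> everything
```

The zero-lines case is a choice made for this sketch; the real daemon's handling of edge values may differ.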
-
-Passing a negative number or a non-integer to `--tail` is invalid and the
-value is set to `all` in that case. This behavior may change in the future.
-
-The `docker logs --timestamps` command will add an RFC3339Nano
-timestamp, for example `2014-09-16T06:17:46.000000000Z`, to each
-log entry. To ensure that the timestamps are aligned, the
-nanosecond part of the timestamp will be padded with zeros when necessary.
-
-## pause
-
- Usage: docker pause CONTAINER
-
- Pause all processes within a container
-
-The `docker pause` command uses the cgroups freezer to suspend all processes in
-a container. Traditionally, when suspending a process the `SIGSTOP` signal is
-used, which is observable by the process being suspended. With the cgroups freezer
-the process is unaware, and unable to capture, that it is being suspended,
-and subsequently resumed.
-
-See the
-[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt)
-for further details.
-
-## port
-
- Usage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]
-
- List port mappings for the CONTAINER, or lookup the public-facing port that is NAT-ed to the PRIVATE_PORT
-
-You can find out all the ports mapped by not specifying a `PRIVATE_PORT`, or
-just a specific mapping:
-
- $ sudo docker ps test
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- b650456536c7 busybox:latest top 54 minutes ago Up 54 minutes 0.0.0.0:1234->9876/tcp, 0.0.0.0:4321->7890/tcp test
- $ sudo docker port test
- 7890/tcp -> 0.0.0.0:4321
- 9876/tcp -> 0.0.0.0:1234
- $ sudo docker port test 7890/tcp
- 0.0.0.0:4321
- $ sudo docker port test 7890/udp
- 2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
- $ sudo docker port test 7890
- 0.0.0.0:4321
-
-## rename
-
- Usage: docker rename OLD_NAME NEW_NAME
-
- Rename an existing container to NEW_NAME
-
-The `docker rename` command renames an existing container.
-
-## ps
-
- Usage: docker ps [OPTIONS]
-
- List containers
-
- -a, --all=false Show all containers. Only running containers are shown by default.
- --before="" Show only containers created before Id or Name, including non-running ones.
- -f, --filter=[] Provide filter values. Valid filters:
- exited= containers with exit code of
- status=(restarting|running|paused|exited)
- -l, --latest=false Show only the latest created container, including non-running ones.
- -n=-1 Show n last created containers, including non-running ones.
- --no-trunc=false Don't truncate output
- -q, --quiet=false Only display numeric IDs
- -s, --size=false Display total file sizes
- --since="" Show only containers created since Id or Name, including non-running ones.
-
-Running `docker ps --no-trunc` showing 2 linked containers.
-
- $ sudo docker ps
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- f7ee772232194fcc088c6bdec6ea09f7b3f6c54d53934658164b8602d7cd4744 ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp
- d0963715a061c7c7b7cc80b2646da913a959fbf13e80a971d4a60f6997a2f595 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db
-
-`docker ps` will show only running containers by default. To see all containers:
-`docker ps -a`
-
-#### Filtering
-
-The filtering flag (`-f` or `--filter`) format is a `key=value` pair.
If there is more
-than one filter, then pass multiple flags (e.g. `--filter "foo=bar" --filter "bif=baz"`).
-
-Current filters:
-
- * exited (int - the code of exited containers. Only useful with `--all`)
- * status (restarting|running|paused|exited)
-
-##### Successfully exited containers
-
-    $ sudo docker ps -a --filter 'exited=0'
-    CONTAINER ID        IMAGE             COMMAND                CREATED             STATUS                   PORTS                      NAMES
-    ea09c3c82f6e        registry:latest   /srv/run.sh            2 weeks ago         Exited (0) 2 weeks ago   127.0.0.1:5000->5000/tcp   desperate_leakey
-    106ea823fe4e        fedora:latest     /bin/sh -c 'bash -l'   2 weeks ago         Exited (0) 2 weeks ago                              determined_albattani
-    48ee228c9464        fedora:20         bash                   2 weeks ago         Exited (0) 2 weeks ago                              tender_torvalds
-
-This shows all the containers that have exited with a status of `0`.
-
-## pull
-
-    Usage: docker pull [OPTIONS] NAME[:TAG]
-
-    Pull an image or a repository from the registry
-
-      -a, --all-tags=false    Download all tagged images in the repository
-
-Most of your images will be created on top of a base image from the
-[Docker Hub](https://hub.docker.com) registry.
-
-[Docker Hub](https://hub.docker.com) contains many pre-built images that you
-can `pull` and try without needing to define and configure your own.
-
-It is also possible to manually specify the path of a registry to pull from.
-For example, if you have set up a local registry, you can specify its path to
-pull from it. A repository path is similar to a URL, but does not contain
-a protocol specifier (`https://`, for example).
-
-To download a particular image, or set of images (i.e., a repository),
-use `docker pull`:
-
-    $ sudo docker pull debian
-    # will pull the debian:latest image, its intermediate layers
-    # and any aliases of the same id
-    $ sudo docker pull debian:testing
-    # will pull the image named debian:testing and any intermediate
-    # layers it is based on.
-    # (Typically the empty `scratch` image, a MAINTAINER layer,
-    # and the un-tarred base).
-
-    $ sudo docker pull --all-tags centos
-    # will pull all the images from the centos repository
-    $ sudo docker pull registry.hub.docker.com/debian
-    # manually specifies the path to the default Docker registry. This could
-    # be replaced with the path to a local registry to pull from another source.
-
-## push
-
-    Usage: docker push NAME[:TAG]
-
-    Push an image or a repository to the registry
-
-Use `docker push` to push your images to the [Docker Hub](https://hub.docker.com)
-registry or to a self-hosted one.
-
-## restart
-
-    Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]
-
-    Restart a running container
-
-      -t, --time=10    Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.
-
-## rm
-
-    Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
-
-    Remove one or more containers
-
-      -f, --force=false      Force the removal of a running container (uses SIGKILL)
-      -l, --link=false       Remove the specified link and not the underlying container
-      -v, --volumes=false    Remove the volumes associated with the container
-
-#### Examples
-
-    $ sudo docker rm /redis
-    /redis
-
-This will remove the container referenced under the link
-`/redis`.
-
-    $ sudo docker rm --link /webapp/redis
-    /webapp/redis
-
-This will remove the underlying link between `/webapp` and the `/redis`
-containers, removing all network communication between them.
-
-    $ sudo docker rm --force redis
-    redis
-
-The main process inside the container referenced under the link `/redis` will receive
-`SIGKILL`, then the container will be removed.
-
-    $ sudo docker rm $(sudo docker ps -a -q)
-
-This command will delete all stopped containers. The command
-`docker ps -a -q` will return all existing container IDs and pass them to
-the `rm` command which will delete them. Any running containers will not be
-deleted.
-
-## rmi
-
-    Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
- - Remove one or more images - - -f, --force=false Force removal of the image - --no-prune=false Do not delete untagged parents - -#### Removing tagged images - -Images can be removed either by their short or long IDs, or their image -names. If an image has more than one name, each of them needs to be -removed before the image is removed. - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - test1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - test2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - - $ sudo docker rmi fd484f19954f - Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories - 2013/12/11 05:47:16 Error: failed to remove one or more images - - $ sudo docker rmi test1 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - $ sudo docker rmi test2 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - $ sudo docker rmi test - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - -## run - - Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] - - Run a command in a new container - - -a, --attach=[] Attach to STDIN, STDOUT or STDERR. - --add-host=[] Add a custom host-to-IP mapping (host:ip) - -c, --cpu-shares=0 CPU shares (relative weight) - --cap-add=[] Add Linux capabilities - --cap-drop=[] Drop Linux capabilities - --cidfile="" Write the container ID to the file - --cpuset="" CPUs in which to allow execution (0-3, 0,1) - -d, --detach=false Detached mode: run the container in the background and print the new container ID - --device=[] Add a host device to the container (e.g. 
--device=/dev/sdc:/dev/xvdc:rwm)
-      --dns=[]                   Set custom DNS servers
-      --dns-search=[]            Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)
-      -e, --env=[]               Set environment variables
-      --entrypoint=""            Overwrite the default ENTRYPOINT of the image
-      --env-file=[]              Read in a line-delimited file of environment variables
-      --expose=[]                Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host
-      -h, --hostname=""          Container host name
-      -i, --interactive=false    Keep STDIN open even if not attached
-      --ipc=""                   Default is to create a private IPC namespace (POSIX SysV IPC) for the container
-                                   'container:<name|id>': reuses another container's shared memory, semaphores and message queues
-                                   'host': use the host's shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.
-      --link=[]                  Add link to another container in the form of name:alias
-      --lxc-conf=[]              (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
-      -m, --memory=""            Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
-      --memory-swap=""           Total memory usage (memory + swap), set '-1' to disable swap (format: <number><optional unit>, where unit = b, k, m or g)
-      --mac-address=""           Container MAC address (e.g. 92:d0:c6:0a:29:33)
-      --name=""                  Assign a name to the container
-      --net="bridge"             Set the Network mode for the container
-                                   'bridge': creates a new network stack for the container on the docker bridge
-                                   'none': no networking for this container
-                                   'container:<name|id>': reuses another container's network stack
-                                   'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
- -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces - -p, --publish=[] Publish a container's port to the host - format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort - Both hostPort and containerPort can be specified as a range of ports. - When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`) - (use 'docker port' to see the actual mapping) - --pid=host 'host': use the host PID namespace inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure. - --privileged=false Give extended privileges to this container - --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) - --rm=false Automatically remove the container when it exits (incompatible with -d) - --security-opt=[] Security Options - --sig-proxy=true Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied. - -t, --tty=false Allocate a pseudo-TTY - -u, --user="" Username or UID - -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container) - --volumes-from=[] Mount volumes from the specified container(s) - -w, --workdir="" Working directory inside the container - -The `docker run` command first `creates` a writeable container layer over the -specified image, and then `starts` it using the specified command. That is, -`docker run` is equivalent to the API `/containers/create` then -`/containers/(id)/start`. A stopped container can be restarted with all its -previous changes intact using `docker start`. See `docker ps -a` to view a list -of all containers. 
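The `-p` formats listed in the options above can be told apart by counting their `:` separators. The splitting rules can be sketched in plain shell — a rough model only, not Docker's actual parser, and `parse_publish` is a hypothetical helper:

```shell
# Split a -p publish spec into its parts.
# Formats: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
parse_publish() {
    spec=$1
    case $spec in
        *:*:*)  # two colons: ip:hostPort:containerPort (hostPort may be empty)
            ip=${spec%%:*}
            rest=${spec#*:}
            hostPort=${rest%%:*}
            containerPort=${rest#*:}
            ;;
        *:*)    # one colon: hostPort:containerPort
            ip=""
            hostPort=${spec%%:*}
            containerPort=${spec#*:}
            ;;
        *)      # no colon: containerPort only; the host port is chosen for you
            ip=""
            hostPort=""
            containerPort=$spec
            ;;
    esac
    echo "ip=$ip hostPort=$hostPort containerPort=$containerPort"
}

parse_publish 127.0.0.1:80:8080   # ip=127.0.0.1 hostPort=80 containerPort=8080
parse_publish 127.0.0.1::8080     # ip=127.0.0.1 hostPort= containerPort=8080
parse_publish 80:8080             # ip= hostPort=80 containerPort=8080
parse_publish 8080                # ip= hostPort= containerPort=8080
```

An empty `hostPort` (the `ip::containerPort` form) is how you pin the interface while still letting Docker pick a random host port.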
-
-There is detailed information about `docker run` in the [Docker run reference](
-/reference/run/).
-
-The `docker run` command can be used in combination with `docker commit` to
-[*change the command that a container runs*](#commit-an-existing-container).
-
-See the [Docker User Guide](/userguide/dockerlinks/) for more detailed
-information about the `--expose`, `-p`, `-P` and `--link` parameters,
-and linking containers.
-
-#### Examples
-
-    $ sudo docker run --name test -it debian
-    $$ exit 13
-    exit
-    $ echo $?
-    13
-    $ sudo docker ps -a | grep test
-    275c44472aeb        debian:7            "/bin/bash"         26 seconds ago      Exited (13) 17 seconds ago                         test
-
-In this example, we are running `bash` interactively in the `debian` image, and giving
-the container the name `test`. We then quit `bash` by running `exit 13`, which means `bash`
-will have an exit code of `13`. This is then passed on to the caller of `docker run`, and
-is recorded in the `test` container metadata.
-
-    $ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo "test"
-
-This will create a container and print `test` to the console. The `--cidfile`
-flag makes Docker attempt to create a new file and write the container ID to it.
-If the file exists already, Docker will return an error. Docker will close this
-file when `docker run` exits.
-
-    $ sudo docker run -t -i --rm ubuntu bash
-    root@bc338942ef20:/# mount -t tmpfs none /mnt
-    mount: permission denied
-
-This will *not* work, because by default, most potentially dangerous kernel
-capabilities are dropped, including `cap_sys_admin` (which is required to mount
-filesystems).
However, the `--privileged` flag will allow it to run:
-
-    $ sudo docker run --privileged ubuntu bash
-    root@50e3f57e16e6:/# mount -t tmpfs none /mnt
-    root@50e3f57e16e6:/# df -h
-    Filesystem      Size  Used Avail Use% Mounted on
-    none            1.9G     0  1.9G   0% /mnt
-
-The `--privileged` flag gives *all* capabilities to the container, and it also
-lifts all the limitations enforced by the `device` cgroup controller. In other
-words, the container can then do almost everything that the host can do. This
-flag exists to allow special use-cases, like running Docker within Docker.
-
-    $ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd
-
-The `-w` option runs the command inside the given directory, here
-`/path/to/dir/`. If the path does not exist, it is created inside the container.
-
-    $ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd
-
-The `-v` flag mounts the current working directory into the container, and `-w`
-then runs the command from that same directory by changing into the path
-returned by `pwd`. So this combination executes the command inside the
-container, from the current working directory.
-
-    $ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash
-
-When the host directory of a bind-mounted volume doesn't exist, Docker
-will automatically create this directory on the host for you. In the
-example above, Docker will create the `/doesnt/exist`
-folder before starting your container.
-
-    $ sudo docker run --read-only -v /icanwrite busybox touch /icanwrite/here
-
-Volumes can be used in combination with `--read-only` to control where
-a container writes files. The `--read-only` flag mounts the container's root
-filesystem as read only, prohibiting writes to locations other than the
-specified volumes for the container.
-
-    $ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh
-
-By bind-mounting the docker unix socket and statically linked docker
-binary (such as that provided by [https://get.docker.com](
-https://get.docker.com)), you give the container full access to create and
-manipulate the host's Docker daemon.
-
-    $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash
-
-This binds port `8080` of the container to port `80` on `127.0.0.1` of
-the host machine. The [Docker User Guide](/userguide/dockerlinks/)
-explains in detail how to manipulate ports in Docker.
-
-    $ sudo docker run --expose 80 ubuntu bash
-
-This exposes port `80` of the container for use within a link without
-publishing the port to the host system's interfaces. The [Docker User
-Guide](/userguide/dockerlinks) explains in detail how to manipulate
-ports in Docker.
-
-    $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
-
-This sets environment variables in the container. For illustration, all three
-flags are shown here. The `-e` and `--env` flags take an environment variable and
-value, or, if no `=` is provided, that variable's current value is passed
-through (i.e. `$MYVAR1` from the host is set to `$MYVAR1` in the container).
-When no `=` is provided and that variable is not defined in the client's
-environment, that variable will be removed from the container's list of
-environment variables.
-All three flags, `-e`, `--env` and `--env-file`, can be repeated.
-
-Regardless of the order of these three flags, the `--env-file` entries are processed
-first, and then the `-e`/`--env` flags. This way, `-e` or `--env` will
-override variables as needed.
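That precedence can be sketched in plain shell — a conceptual model of the ordering only, not Docker's implementation (the file path and variable names here are made up):

```shell
# A hypothetical env file, standing in for the --env-file argument
printf 'TEST_FOO=BAR\nOTHER=1\n' > /tmp/env.list

# --env-file entries are applied first...
while IFS= read -r line; do
    case $line in ''|'#'*) continue ;; esac   # skip blanks and comments
    export "$line"
done < /tmp/env.list

# ...then -e/--env values, so an explicit -e wins on conflict
export TEST_FOO="This is a test"

echo "TEST_FOO=$TEST_FOO OTHER=$OTHER"   # TEST_FOO=This is a test OTHER=1
```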
-
-    $ cat ./env.list
-    TEST_FOO=BAR
-    $ sudo docker run --env TEST_FOO="This is a test" --env-file ./env.list busybox env | grep TEST_FOO
-    TEST_FOO=This is a test
-
-The `--env-file` flag takes a filename as an argument and expects each line
-to be in the `VAR=VAL` format, mimicking the argument passed to `--env`. Comment
-lines need only be prefixed with `#`.
-
-An example of a file passed with `--env-file`:
-
-    $ cat ./env.list
-    TEST_FOO=BAR
-
-    # this is a comment
-    TEST_APP_DEST_HOST=10.10.0.127
-    TEST_APP_DEST_PORT=8888
-
-    # pass through this variable from the caller
-    TEST_PASSTHROUGH
-    $ sudo TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env
-    HOME=/
-    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
-    HOSTNAME=5198e0745561
-    TEST_FOO=BAR
-    TEST_APP_DEST_HOST=10.10.0.127
-    TEST_APP_DEST_PORT=8888
-    TEST_PASSTHROUGH=howdy
-
-    $ sudo docker run --name console -t -i ubuntu bash
-
-This will create and run a new container named `console`.
-
-    $ sudo docker run --link /redis:redis --name console ubuntu bash
-
-The `--link` flag will link the container named `/redis` into the newly
-created container with the alias `redis`. The new container can access the
-network and environment of the `redis` container via environment variables.
-The `--name` flag will assign the name `console` to the newly created
-container.
-
-    $ sudo docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd
-
-The `--volumes-from` flag mounts all the defined volumes from the referenced
-containers. Containers can be specified by repeating the `--volumes-from`
-argument. The container ID may be optionally suffixed with `:ro` or `:rw` to
-mount the volumes in read-only or read-write mode, respectively. By default,
-the volumes are mounted in the same mode (read-write or read-only) as
-the reference container.
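The optional `:ro`/`:rw` suffix can be split off the container reference like this — a sketch of the syntax only (`split_mode` is a hypothetical helper, and the IDs come from the example above):

```shell
# Split a --volumes-from argument into a container reference and a mount mode.
# No suffix means the mode is inherited from the reference container.
split_mode() {
    case $1 in
        *:ro) echo "container=${1%:ro} mode=ro" ;;
        *:rw) echo "container=${1%:rw} mode=rw" ;;
        *)    echo "container=$1 mode=inherited" ;;
    esac
}

split_mode 777f7dc92da7        # container=777f7dc92da7 mode=inherited
split_mode ba8c0c54f0f2:ro     # container=ba8c0c54f0f2 mode=ro
```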
- -The `-a` flag tells `docker run` to bind to the container's `STDIN`, `STDOUT` or -`STDERR`. This makes it possible to manipulate the output and input as needed. - - $ echo "test" | sudo docker run -i -a stdin ubuntu cat - - -This pipes data into a container and prints the container's ID by attaching -only to the container's `STDIN`. - - $ sudo docker run -a stderr ubuntu echo test - -This isn't going to print anything unless there's an error because we've -only attached to the `STDERR` of the container. The container's logs -still store what's been written to `STDERR` and `STDOUT`. - - $ cat somefile | sudo docker run -i -a stdin mybuilder dobuild - -This is how piping a file into a container could be done for a build. -The container's ID will be printed after the build is done and the build -logs could be retrieved using `docker logs`. This is -useful if you need to pipe a file or something else into a container and -retrieve the container's ID once the container has finished running. - - $ sudo docker run --device=/dev/sdc:/dev/xvdc --device=/dev/sdd --device=/dev/zero:/dev/nulo -i -t ubuntu ls -l /dev/{xvdc,sdd,nulo} - brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc - brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd - crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo - -It is often necessary to directly expose devices to a container. The `--device` -option enables that. For example, a specific block storage device or loop -device or audio device can be added to an otherwise unprivileged container -(without the `--privileged` flag) and have the application directly access it. - -By default, the container will be able to `read`, `write` and `mknod` these devices. 
-This can be overridden using a third `:rwm` set of options to each `--device`
-flag:
-
-```
-    $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc
-
-    Command (m for help): q
-    $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc
-    You will not be able to write the partition table.
-
-    Command (m for help): q
-
-    $ sudo docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc
-
-    Command (m for help): q
-
-    $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc
-    fdisk: unable to open /dev/xvdc: Operation not permitted
-```
-
-> **Note:**
-> `--device` cannot be safely used with ephemeral devices. Block devices that
-> may be removed should not be added to untrusted containers with `--device`.
-
-**A complete example:**
-
-    $ sudo docker run -d --name static static-web-files sh
-    $ sudo docker run -d --expose=8098 --name riak riakserver
-    $ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver
-    $ sudo docker run -d -p 1443:443 --dns=10.0.0.1 --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver
-    $ sudo docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log
-
-This example shows five containers that might be set up to test a web
-application change:
-
-1. Start a pre-prepared volume image `static-web-files` (in the background)
-   that has CSS, image and static HTML in it (with a `VOLUME` instruction in
-   the Dockerfile to allow the web server to use those files);
-2. Start a pre-prepared `riakserver` image, give the container name `riak` and
-   expose port `8098` to any containers that link to it;
-3.
Start the `appserver` image, restricting its memory usage to 100MB, setting
-   two environment variables `DEVELOPMENT` and `BRANCH` and bind-mounting the
-   current directory (`$(pwd)`) in the container in read-only mode as `/app/bin`;
-4. Start the `webserver`, mapping port `443` in the container to port `1443` on
-   the Docker server, setting the DNS server to `10.0.0.1` and DNS search
-   domain to `dev.org`, creating a volume to put the log files into (so we can
-   access it from another container), then importing the files from the volume
-   exposed by the `static` container, and linking to all exposed ports from
-   `riak` and `app`. Lastly, we set the hostname to `www.sven.dev.org` so it's
-   consistent with the pre-generated SSL certificate;
-5. Finally, we create a container that runs `tail -f access.log` using the logs
-   volume from the `web` container, setting the workdir to `/var/log/httpd`. The
-   `--rm` option means that when the container exits, the container's layer is
-   removed.
-
-#### Restart Policies
-
-Use Docker's `--restart` to specify a container's *restart policy*. A restart
-policy controls whether the Docker daemon restarts a container after exit.
-Docker supports the following restart policies:
-
-| Policy                     | Result                                                                                                                                                              |
-|----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `no`                       | Do not automatically restart the container when it exits. This is the default.                                                                                        |
-| `on-failure[:max-retries]` | Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.                          |
-| `always`                   | Always restart the container regardless of the exit status. When you specify `always`, the Docker daemon will try to restart the container indefinitely.              |
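The `on-failure:max-retries` policy can be pictured as a supervisor loop — a conceptual sketch, not the daemon's actual logic (`run_container` is a made-up stand-in for a container whose main process always fails):

```shell
run_container() { return 1; }   # stand-in: the main process exits non-zero every time

max_retries=3
retries=0
while :; do
    status=0
    run_container || status=$?                     # capture the exit status
    if [ "$status" -eq 0 ]; then
        break                                      # clean exit: on-failure does not restart
    fi
    retries=$((retries + 1))
    if [ "$retries" -ge "$max_retries" ]; then
        break                                      # retry budget exhausted: give up
    fi
done
echo "restart attempts: $retries"                  # restart attempts: 3
```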
-
-    $ sudo docker run --restart=always redis
-
-This will run the `redis` container with a restart policy of **always**
-so that if the container exits, Docker will restart it.
-
-More detailed information on restart policies can be found in the
-[Restart Policies (--restart)](/reference/run/#restart-policies-restart) section
-of the Docker run reference page.
-
-### Adding entries to a container hosts file
-
-You can add other hosts into a container's `/etc/hosts` file by using one or more
-`--add-host` flags. This example adds a static address for a host named `docker`:
-
-```
-    $ docker run --add-host=docker:10.180.0.1 --rm -it debian
-    $$ ping docker
-    PING docker (10.180.0.1): 48 data bytes
-    56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms
-    56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms
-    ^C--- docker ping statistics ---
-    2 packets transmitted, 2 packets received, 0% packet loss
-    round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms
-```
-
-> **Note:**
-> Sometimes you need to connect to the Docker host, which means getting the IP
-> address of the host. You can use the following shell commands to simplify this
-> process:
->
->     $ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
->     $ docker run --add-host=docker:$(hostip) --rm -it debian
-
-## save
-
-    Usage: docker save [OPTIONS] IMAGE [IMAGE...]
-
-    Save an image(s) to a tar archive (streamed to STDOUT by default)
-
-      -o, --output=""    Write to a file, instead of STDOUT
-
-Produces a tarred repository to the standard output stream. For each
-argument provided, it contains all parent layers and all tags and
-versions, or the specified `repo:tag`.
-
-It is used to create a backup that can then be used with `docker load`.
-
-    $ sudo docker save busybox > busybox.tar
-    $ ls -sh busybox.tar
-    2.7M busybox.tar
-    $ sudo docker save --output busybox.tar busybox
-    $ ls -sh busybox.tar
-    2.7M busybox.tar
-    $ sudo docker save -o fedora-all.tar fedora
-    $ sudo docker save -o fedora-latest.tar fedora:latest
-
-It is even useful to cherry-pick particular tags of an image repository:
-
-    $ sudo docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy
-
-## search
-
-Search [Docker Hub](https://hub.docker.com) for images
-
-    Usage: docker search [OPTIONS] TERM
-
-    Search the Docker Hub for images
-
-      --automated=false    Only show automated builds
-      --no-trunc=false     Don't truncate output
-      -s, --stars=0        Only display images with at least x stars
-
-See [*Find Public Images on Docker Hub*](
-/userguide/dockerrepos/#searching-for-images) for
-more details on finding shared images from the command line.
-
-> **Note:**
-> Search queries will only return up to 25 results.
-
-## start
-
-    Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
-
-    Restart a stopped container
-
-      -a, --attach=false         Attach container's STDOUT and STDERR and forward all signals to the process
-      -i, --interactive=false    Attach container's STDIN
-
-## stats
-
-    Usage: docker stats CONTAINER [CONTAINER...]
-
-    Display a live stream of one or more containers' resource usage statistics
-
-      --help=false    Print usage
-
-> **Note**: this functionality currently only works when using the *libcontainer* exec-driver.
-
-Running `docker stats` on multiple containers:
-
-    $ sudo docker stats redis1 redis2
-    CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
-    redis1              0.07%               796 KiB/64 MiB      1.21%               788 B/648 B
-    redis2              0.07%               2.746 MiB/64 MiB    4.29%               1.266 KiB/648 B
-
-The `docker stats` command will only return a live stream of data for running
-containers. Stopped containers will not return any data.
- -> **Note:** -> If you want more detailed information about a container's resource usage, use the API endpoint. - -## stop - - Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] - - Stop a running container by sending SIGTERM and then SIGKILL after a grace period - - -t, --time=10 Number of seconds to wait for the container to stop before killing it. Default is 10 seconds. - -The main process inside the container will receive `SIGTERM`, and after a -grace period, `SIGKILL`. - -## tag - - Usage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG] - - Tag an image into a repository - - -f, --force=false Force - -You can group your images together using names and tags, and then upload -them to [*Share Images via Repositories*]( -/userguide/dockerrepos/#contributing-to-docker-hub). - -## top - - Usage: docker top CONTAINER [ps OPTIONS] - - Display the running processes of a container - -## unpause - - Usage: docker unpause CONTAINER - - Unpause all processes within a container - -The `docker unpause` command uses the cgroups freezer to un-suspend all -processes in a container. - -See the -[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt) -for further details. - -## version - - Usage: docker version - - Show the Docker version information. - -Show the Docker version, API version, Git commit, and Go version of -both Docker client and daemon. - -## wait - - Usage: docker wait CONTAINER [CONTAINER...] - - Block until a container stops, then print its exit code. 
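`docker wait`'s behaviour is analogous to the shell's own `wait` builtin, which blocks until a background job stops and reports its exit code:

```shell
sh -c 'exit 7' &     # a background job standing in for a container
pid=$!

code=0
wait "$pid" || code=$?   # block until the job stops, capturing its status

echo "exit code: $code"  # exit code: 7
```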
- diff --git a/reference/commandline/cli.md~~ b/reference/commandline/cli.md~~ deleted file mode 100644 index 4cd30aca12..0000000000 --- a/reference/commandline/cli.md~~ +++ /dev/null @@ -1,2128 +0,0 @@ -page_title: Command Line Interface -page_description: Docker's CLI command description and usage -page_keywords: Docker, Docker documentation, CLI, command line - -# Command Line - -{{ include "no-remote-sudo.md" }} - -To list available commands, either run `docker` with no parameters -or execute `docker help`: - - $ sudo docker - Usage: docker [OPTIONS] COMMAND [arg...] - -H, --host=[]: The socket(s) to bind to in daemon mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. - - A self-sufficient runtime for Linux containers. - - ... - -## Help -To list the help on any command just execute the command, followed by the `--help` option. - - $ sudo docker run --help - - Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] - - Run a command in a new container - - -a, --attach=[] Attach to STDIN, STDOUT or STDERR. - -c, --cpu-shares=0 CPU shares (relative weight) - ... - -## Option types - -Single character command line options can be combined, so rather than -typing `docker run -i -t --name test busybox sh`, -you can write `docker run -it --name test busybox sh`. - -### Boolean - -Boolean options take the form `-d=false`. The value you see in the help text is the -default value which is set if you do **not** specify that flag. If you specify -a Boolean flag without a value, this will set the flag to `true`, irrespective -of the default value. - -For example, running `docker run -d` will set the value to `true`, so -your container **will** run in "detached" mode, in the background. - -Options which default to `true` (e.g., `docker build --rm=true`) can only -be set to the non-default value by explicitly setting them to `false`: - - $ docker build --rm=false . 
- -### Multi - -Options like `-a=[]` indicate they can be specified multiple times: - - $ sudo docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash - -Sometimes this can use a more complex value string, as for `-v`: - - $ sudo docker run -v /host:/container example/mysql - -### Strings and Integers - -Options like `--name=""` expect a string, and they -can only be specified once. Options like `-c=0` -expect an integer, and they can only be specified once. - -## daemon - - Usage: docker [OPTIONS] COMMAND [arg...] - - A self-sufficient runtime for linux containers. - - Options: - --api-enable-cors=false Enable CORS headers in the remote API - -b, --bridge="" Attach containers to a pre-existing network bridge - use 'none' to disable container networking - --bip="" Use this CIDR notation address for the network bridge's IP, not compatible with -b - -D, --debug=false Enable debug mode - -d, --daemon=false Enable daemon mode - --dns=[] Force Docker to use specific DNS servers - --dns-search=[] Force Docker to use specific DNS search domains - -e, --exec-driver="native" Force the Docker runtime to use a specific exec driver - --fixed-cidr="" IPv4 subnet for fixed IPs (e.g.: 10.20.0.0/16) - this subnet must be nested in the bridge subnet (which is defined by -b or --bip) - --fixed-cidr-v6="" IPv6 subnet for global IPs (e.g.: 2a00:1450::/64) - -G, --group="docker" Group to assign the unix socket specified by -H when running in daemon mode - use '' (the empty string) to disable setting of a group - -g, --graph="/var/lib/docker" Path to use as the root of the Docker runtime - -H, --host=[] The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. 
- --icc=true Allow unrestricted inter-container and Docker daemon host communication - --insecure-registry=[] Enable insecure communication with specified registries (disables certificate verification for HTTPS and enables HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16) - --ip=0.0.0.0 Default IP address to use when binding container ports - --ip-forward=true Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement. - --ip-masq=true Enable IP masquerading for bridge's IP range - --iptables=true Enable Docker's addition of iptables rules - --ipv6=false Enable Docker IPv6 support - -l, --log-level="info" Set the logging level (debug, info, warn, error, fatal) - --label=[] Set key=value labels to the daemon (displayed in `docker info`) - --mtu=0 Set the containers network MTU - if no value is provided: default to the default route MTU or 1500 if no default route is available - -p, --pidfile="/var/run/docker.pid" Path to use for daemon PID file - --registry-mirror=[] Specify a preferred Docker registry mirror - -s, --storage-driver="" Force the Docker runtime to use a specific storage driver - --selinux-enabled=false Enable selinux support. SELinux does not presently support the BTRFS storage driver - --storage-opt=[] Set storage driver options - --tls=false Use TLS; implied by --tlsverify flag - --tlscacert="/home/sven/.docker/ca.pem" Trust only remotes providing a certificate signed by the CA given here - --tlscert="/home/sven/.docker/cert.pem" Path to TLS certificate file - --tlskey="/home/sven/.docker/key.pem" Path to TLS key file - --tlsverify=false Use TLS and verify the remote (daemon: verify client, client: verify daemon) - -v, --version=false Print version information and quit - -Options with [] may be specified multiple times. - -The Docker daemon is the persistent process that manages containers. 
-Docker uses the same binary for both the daemon and client. To run the -daemon, provide the `-d` flag. - -To run the daemon with debug output, use `docker -d -D`. - -### Daemon socket option - -The Docker daemon can listen for [Docker Remote API](/reference/api/docker_remote_api/) -requests via three different types of socket: `unix`, `tcp`, and `fd`. - -By default, a `unix` domain socket (or IPC socket) is created at `/var/run/docker.sock`, -requiring either `root` permission or `docker` group membership. - -If you need to access the Docker daemon remotely, you need to enable the `tcp` -socket. Beware that the default setup provides un-encrypted and un-authenticated -direct access to the Docker daemon, so it should be secured either using the -[built-in HTTPS encrypted socket](/articles/https/) or by putting a secure web -proxy in front of it. You can listen on port `2375` on all network interfaces -with `-H tcp://0.0.0.0:2375`, or on a particular network interface using its IP -address: `-H tcp://192.168.59.103:2375`. It is conventional to use port `2375` -for un-encrypted, and port `2376` for encrypted, communication with the daemon. - -> **Note** If you're using an HTTPS encrypted socket, keep in mind that only TLS 1.0 -> and greater is supported. Protocols SSLv3 and under are no longer supported -> for security reasons. - -On Systemd-based systems, you can communicate with the daemon via -[Systemd socket activation](http://0pointer.de/blog/projects/socket-activation.html); to do so, use -`docker -d -H fd://`. Using `fd://` will work perfectly for most setups, but -you can also specify individual sockets: `docker -d -H fd://3`. If the -specified socket-activated files aren't found, then Docker will exit. You -can find examples of using Systemd socket activation with Docker and -Systemd in the [Docker source tree]( -https://github.com/docker/docker/tree/master/contrib/init/systemd/). 
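As a sketch, socket activation pairs a `docker.socket` unit with the daemon's service unit. The fragment below follows the general shape of the contrib examples mentioned above; treat the exact file name, directives, and paths as illustrative rather than canonical:

```ini
# /etc/systemd/system/docker.socket (illustrative; see contrib/init/systemd/)
[Unit]
Description=Docker Socket for the API

[Socket]
# systemd opens this socket and hands it to the daemon started with -H fd://
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```

With this in place, systemd owns the listening socket and passes it to `docker -d -H fd://` as an inherited file descriptor when the first client connects.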
- -You can configure the Docker daemon to listen to multiple sockets at the same -time using multiple `-H` options: - - # listen using the default unix socket, and on 2 specific IP addresses on this host. - docker -d -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2 - -The Docker client will honor the `DOCKER_HOST` environment variable to set -the `-H` flag for the client. - - $ sudo docker -H tcp://0.0.0.0:2375 ps - # or - $ export DOCKER_HOST="tcp://0.0.0.0:2375" - $ sudo docker ps - # both are equal - -Setting the `DOCKER_TLS_VERIFY` environment variable to any value other than the empty -string is equivalent to setting the `--tlsverify` flag. The following are equivalent: - - $ sudo docker --tlsverify ps - # or - $ export DOCKER_TLS_VERIFY=1 - $ sudo docker ps - -The Docker client will honor the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` -environment variables (or the lowercase versions thereof). `HTTPS_PROXY` takes -precedence over `HTTP_PROXY`. If you happen to have a proxy configured with the -`HTTP_PROXY` or `HTTPS_PROXY` environment variables but still want to -communicate with the Docker daemon over its default `unix` domain socket, -setting the `NO_PROXY` environment variable to the path of the socket -(`/var/run/docker.sock`) is required. - -### Daemon storage-driver option - -The Docker daemon has support for several different image layer storage drivers: `aufs`, -`devicemapper`, `btrfs` and `overlay`. - -The `aufs` driver is the oldest, but is based on a Linux kernel patch-set that -is unlikely to be merged into the main kernel. The patch-set is also known to cause some -serious kernel crashes. However, `aufs` is also the only storage driver that allows -containers to share executable and shared library memory, so is a useful choice -when running thousands of containers with the same program or libraries. - -The `devicemapper` driver uses thin provisioning and Copy on Write (CoW) -snapshots. 
For each devicemapper graph location – typically -`/var/lib/docker/devicemapper` – a thin pool is created based on two block -devices, one for data and one for metadata. By default, these block devices -are created automatically by using loopback mounts of automatically created -sparse files. Refer to [Storage driver options](#storage-driver-options) below -for details on how to customize this setup. -The [Resizing Docker containers with the Device Mapper plugin]( -http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/) article -explains how to tune your existing setup without the use of options. - -The `btrfs` driver is very fast for `docker build` - but like `devicemapper` does not -share executable memory between devices. Use `docker -d -s btrfs -g /mnt/btrfs_partition`. - -The `overlay` driver is a very fast union filesystem. It was merged into the main -Linux kernel as of [3.18.0](https://lkml.org/lkml/2014/10/26/137). -Call `docker -d -s overlay` to use it. -> **Note:** -> It is currently unsupported on `btrfs` or any Copy on Write filesystem -> and should only be used over `ext4` partitions. - -#### Storage driver options - -A particular storage driver can be configured with options specified with -`--storage-opt` flags. Currently, the only driver that accepts options is -`devicemapper`. All its options are prefixed with `dm`. - -Currently supported options are: - - * `dm.basesize` - - Specifies the size to use when creating the base device, which limits the - size of images and containers. The default value is 10G. Note, thin devices - are inherently "sparse", so a 10G device which is mostly empty doesn't use - 10 GB of space on the pool. However, the filesystem will use more space for - the empty case the larger the device is. - - **Warning**: This value affects the system-wide "base" empty filesystem - that may already be initialized and inherited by pulled images. 
Typically, - a change to this value will require additional steps to take effect: - - $ sudo service docker stop - $ sudo rm -rf /var/lib/docker - $ sudo service docker start - - Example use: - - $ sudo docker -d --storage-opt dm.basesize=20G - - * `dm.loopdatasize` - - Specifies the size to use when creating the loopback file for the "data" - device which is used for the thin pool. The default size is 100G. Note that - the file is sparse, so it will not initially take up this much space. - - Example use: - - $ sudo docker -d --storage-opt dm.loopdatasize=200G - - * `dm.loopmetadatasize` - - Specifies the size to use when creating the loopback file for the - "metadata" device which is used for the thin pool. The default size is 2G. - Note that the file is sparse, so it will not initially take up this much - space. - - Example use: - - $ sudo docker -d --storage-opt dm.loopmetadatasize=4G - - * `dm.fs` - - Specifies the filesystem type to use for the base device. The supported - options are "ext4" and "xfs". The default is "ext4" - - Example use: - - $ sudo docker -d --storage-opt dm.fs=xfs - - * `dm.mkfsarg` - - Specifies extra mkfs arguments to be used when creating the base device. - - Example use: - - $ sudo docker -d --storage-opt "dm.mkfsarg=-O ^has_journal" - - * `dm.mountopt` - - Specifies extra mount options used when mounting the thin devices. - - Example use: - - $ sudo docker -d --storage-opt dm.mountopt=nodiscard - - * `dm.datadev` - - Specifies a custom blockdevice to use for data for the thin pool. - - If using a block device for device mapper storage, ideally both datadev and - metadatadev should be specified to completely avoid using the loopback - device. - - Example use: - - $ sudo docker -d \ - --storage-opt dm.datadev=/dev/sdb1 \ - --storage-opt dm.metadatadev=/dev/sdc1 - - * `dm.metadatadev` - - Specifies a custom blockdevice to use for metadata for the thin pool. 
- - For best performance the metadata should be on a different spindle than the - data, or even better on an SSD. - - If setting up a new metadata pool it is required to be valid. This can be - achieved by zeroing the first 4k to indicate empty metadata, like this: - - $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 - - Example use: - - $ sudo docker -d \ - --storage-opt dm.datadev=/dev/sdb1 \ - --storage-opt dm.metadatadev=/dev/sdc1 - - * `dm.blocksize` - - Specifies a custom blocksize to use for the thin pool. The default - blocksize is 64K. - - Example use: - - $ sudo docker -d --storage-opt dm.blocksize=512K - - * `dm.blkdiscard` - - Enables or disables the use of blkdiscard when removing devicemapper - devices. This is enabled by default (only) if using loopback devices and is - required to resparsify the loopback file on image/container removal. - - Disabling this on loopback can lead to *much* faster container removal - times, but will make the space used in `/var/lib/docker` directory not be - returned to the system for other use when containers are removed. - - Example use: - - $ sudo docker -d --storage-opt dm.blkdiscard=false - -### Docker exec-driver option - -The Docker daemon uses a specifically built `libcontainer` execution driver as its -interface to the Linux kernel `namespaces`, `cgroups`, and `SELinux`. - -There is still legacy support for the original [LXC userspace tools]( -https://linuxcontainers.org/) via the `lxc` execution driver, however, this is -not where the primary development of new functionality is taking place. -Add `-e lxc` to the daemon flags to use the `lxc` execution driver. - - -### Daemon DNS options - -To set the DNS server for all Docker containers, use -`docker -d --dns 8.8.8.8`. - -To set the DNS search domain for all Docker containers, use -`docker -d --dns-search example.com`. - -### Insecure registries - -Docker considers a private registry either secure or insecure. 
-In the rest of this section, *registry* is used for *private registry*, and `myregistry:5000` -is a placeholder example for a private registry. - -A secure registry uses TLS, and a copy of its CA certificate is placed on the Docker host at -`/etc/docker/certs.d/myregistry:5000/ca.crt`. -An insecure registry is either not using TLS (i.e., listening on plain text HTTP), or is using -TLS with a CA certificate not known by the Docker daemon. The latter can happen when the -certificate was not found under `/etc/docker/certs.d/myregistry:5000/`, or if the certificate -verification failed (i.e., wrong CA). - -By default, Docker assumes that all registries except local ones (see local registries below) are secure. -Communicating with an insecure registry is not possible if Docker assumes that registry is secure. -In order to communicate with an insecure registry, the Docker daemon requires `--insecure-registry` -in one of the following two forms: - -* `--insecure-registry myregistry:5000` tells the Docker daemon that myregistry:5000 should be considered insecure. -* `--insecure-registry 10.1.0.0/16` tells the Docker daemon that any registry whose domain resolves to an IP address in -the subnet described by the CIDR notation should be considered insecure. - -The flag can be used multiple times to allow multiple registries to be marked as insecure. - -If an insecure registry is not marked as insecure, `docker pull`, `docker push`, and `docker search` -will result in an error message prompting the user to either secure the registry or pass the `--insecure-registry` -flag to the Docker daemon as described above. - -Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure -as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future. - -### Running a Docker daemon behind an HTTPS_PROXY - -When running inside a LAN that uses an `HTTPS` proxy, the Docker Hub certificates -will be replaced by the proxy's certificates. 
These certificates need to be added -to your Docker host's configuration: - -1. Install the `ca-certificates` package for your distribution -2. Ask your network admin for the proxy's CA certificate and append it to - `/etc/pki/tls/certs/ca-bundle.crt` -3. Then start your Docker daemon with `HTTPS_PROXY=http://username:password@proxy:port/ docker -d`. - The `username:` and `password@` are optional - and are only needed if your proxy - is set up to require authentication. - -This will only add the proxy and authentication to the Docker daemon's requests; -your `docker build`s and running containers will need extra configuration to use -the proxy. - -### Miscellaneous options - -IP masquerading uses address translation to allow containers without a public IP to talk -to other machines on the Internet. This may interfere with some network topologies, and -can be disabled with `--ip-masq=false`. - -Docker supports softlinks for the Docker data directory -(`/var/lib/docker`) and for `/var/lib/docker/tmp`. The `DOCKER_TMPDIR` and the data directory can be set like this: - - DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 - # or - export DOCKER_TMPDIR=/mnt/disk2/tmp - /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 - - -## attach - - Usage: docker attach [OPTIONS] CONTAINER - - Attach to a running container - - --no-stdin=false Do not attach STDIN - --sig-proxy=true Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied. - -The `docker attach` command allows you to attach to a running container using -the container's ID or name, either to view its ongoing output or to control it -interactively. You can attach to the same contained process multiple times -simultaneously, screen sharing style, or quickly view the progress of your -daemonized process. 
- -You can detach from the container (and leave it running) with `CTRL-p CTRL-q` -(for a quiet exit) or `CTRL-c` which will send a `SIGKILL` to the container. -When you are attached to a container, and exit its main process, the process's -exit code will be returned to the client. - -It is forbidden to redirect the standard input of a `docker attach` command while -attaching to a tty-enabled container (i.e.: launched with `-t`). - -#### Examples - - $ sudo docker run -d --name topdemo ubuntu /usr/bin/top -b - $ sudo docker attach topdemo - top - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05 - Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie - Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st - Mem: 373572k total, 355560k used, 18012k free, 27872k buffers - Swap: 786428k total, 0k used, 786428k free, 221740k cached - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top - - top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05 - Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie - Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st - Mem: 373572k total, 355244k used, 18328k free, 27872k buffers - Swap: 786428k total, 0k used, 786428k free, 221776k cached - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top - - - top - 02:05:58 up 3:06, 0 users, load average: 0.01, 0.02, 0.05 - Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie - Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st - Mem: 373572k total, 355780k used, 17792k free, 27880k buffers - Swap: 786428k total, 0k used, 786428k free, 221776k cached - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top - ^C$ - $ echo $? 
- 0 - $ docker ps -a | grep topdemo - 7998ac8581f9 ubuntu:14.04 "/usr/bin/top -b" 38 seconds ago Exited (0) 21 seconds ago topdemo - -And in this second example, you can see that the exit code returned by the `bash` process -is also returned by the `docker attach` command to its caller: - - $ sudo docker run --name test -d -it debian - 275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab - $ sudo docker attach test - $$ exit 13 - exit - $ echo $? - 13 - $ sudo docker ps -a | grep test - 275c44472aeb debian:7 "/bin/bash" 26 seconds ago Exited (13) 17 seconds ago test - -## build - - Usage: docker build [OPTIONS] PATH | URL | - - - Build a new image from the source code at PATH - - --force-rm=false Always remove intermediate containers, even after unsuccessful builds - --no-cache=false Do not use cache when building the image - --pull=false Always attempt to pull a newer version of the image - -q, --quiet=false Suppress the verbose output generated by the containers - --rm=true Remove intermediate containers after a successful build - -t, --tag="" Repository name (and optionally a tag) to be applied to the resulting image in case of success - -Use this command to build Docker images from a Dockerfile and a -"context". - -The files at `PATH` or `URL` are called the "context" of the build. The -build process may refer to any of the files in the context, for example -when using an [*ADD*](/reference/builder/#add) instruction. -When a single Dockerfile is given as `URL` or is piped through `STDIN` -(`docker build - < Dockerfile`), then no context is set. - -When a Git repository is set as `URL`, then the repository is used as -the context. The Git repository is cloned with its submodules -(`git clone --recursive`). A fresh `git clone` occurs in a temporary directory -on your local host, and then this is sent to the Docker daemon as the -context. This way, your local user credentials, VPNs, and so on can be -used to access private repositories. 
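Whichever way the context is obtained (a local directory or a fresh clone), what the daemon ultimately receives is just a tar archive of it. The packaging step can be mimicked by hand to see this; the paths below are illustrative and no Docker daemon is involved:

```shell
# Build a throwaway context directory and package it roughly the way the client does.
ctx=$(mktemp -d)
printf 'FROM busybox\nCMD echo Hello world\n' > "$ctx/Dockerfile"

# This archive is, in essence, what gets streamed to the Docker daemon:
tar -C "$ctx" -cf "$ctx.tar" .

# The Dockerfile travels inside the context, alongside everything else in PATH.
tar -tf "$ctx.tar" | grep Dockerfile
```

Feeding such an archive to `docker build - < context.tar` (optionally compressed) is the explicit form of the same transfer.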
- -If a file named `.dockerignore` exists in the root of `PATH` then it -is interpreted as a newline-separated list of exclusion patterns. -Exclusion patterns match files or directories relative to `PATH` that -will be excluded from the context. Globbing is done using Go's -[filepath.Match](http://golang.org/pkg/path/filepath#Match) rules. - -Please note that `.dockerignore` files in other subdirectories are -considered as normal files. Filepaths in `.dockerignore` are absolute, with -the current directory as the root. Wildcards are allowed, but the search -is not recursive. - -#### Example .dockerignore file - - */temp* - */*/temp* - temp? - -The first line above, `*/temp*`, would ignore all files with names starting with -`temp` from any subdirectory below the root directory. For example, a file named -`/somedir/temporary.txt` would be ignored. The second line, `*/*/temp*`, will -ignore files starting with name `temp` from any subdirectory that is two levels -below the root directory. For example, the file `/somedir/subdir/temporary.txt` -would get ignored in this case. The last line in the above example, `temp?`, -will ignore the files that match the pattern from the root directory. -For example, the files `tempa` and `tempb` are ignored from the root directory. -Currently there is no support for regular expressions. Formats -like `[^temp*]` are ignored. - -By default the `docker build` command will look for a `Dockerfile` at the -root of the build context. The `-f` (`--file`) option lets you specify -the path to an alternative file to use instead. This is useful -in cases where the same set of files are used for multiple builds. The path -must be to a file within the build context. If a relative path is specified -then it must be relative to the current directory. - -See also: - -[*Dockerfile Reference*](/reference/builder). - -#### Examples - - $ sudo docker build . 
- Uploading context 10240 bytes - Step 1 : FROM busybox - Pulling repository busybox - ---> e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/ - Step 2 : RUN ls -lh / - ---> Running in 9c9e81692ae9 - total 24 - drwxr-xr-x 2 root root 4.0K Mar 12 2013 bin - drwxr-xr-x 5 root root 4.0K Oct 19 00:19 dev - drwxr-xr-x 2 root root 4.0K Oct 19 00:19 etc - drwxr-xr-x 2 root root 4.0K Nov 15 23:34 lib - lrwxrwxrwx 1 root root 3 Mar 12 2013 lib64 -> lib - dr-xr-xr-x 116 root root 0 Nov 15 23:34 proc - lrwxrwxrwx 1 root root 3 Mar 12 2013 sbin -> bin - dr-xr-xr-x 13 root root 0 Nov 15 23:34 sys - drwxr-xr-x 2 root root 4.0K Mar 12 2013 tmp - drwxr-xr-x 2 root root 4.0K Nov 15 23:34 usr - ---> b35f4035db3f - Step 3 : CMD echo Hello world - ---> Running in 02071fceb21b - ---> f52f38b7823e - Successfully built f52f38b7823e - Removing intermediate container 9c9e81692ae9 - Removing intermediate container 02071fceb21b - -This example specifies that the `PATH` is -`.`, and so all the files in the local directory get -`tar`d and sent to the Docker daemon. The `PATH` -specifies where to find the files for the "context" of the build on the -Docker daemon. Remember that the daemon could be running on a remote -machine and that no parsing of the Dockerfile -happens at the client side (where you're running -`docker build`). That means that *all* the files at -`PATH` get sent, not just the ones listed to -[*ADD*](/reference/builder/#add) in the Dockerfile. - -The transfer of context from the local machine to the Docker daemon is -what the `docker` client means when you see the -"Sending build context" message. - -If you wish to keep the intermediate containers after the build is -complete, you must use `--rm=false`. This does not -affect the build cache. - - $ sudo docker build . 
- Uploading context 18.829 MB - Uploading context - Step 0 : FROM busybox - ---> 769b9341d937 - Step 1 : CMD echo Hello world - ---> Using cache - ---> 99cc1ad10469 - Successfully built 99cc1ad10469 - $ echo ".git" > .dockerignore - $ sudo docker build . - Uploading context 6.76 MB - Uploading context - Step 0 : FROM busybox - ---> 769b9341d937 - Step 1 : CMD echo Hello world - ---> Using cache - ---> 99cc1ad10469 - Successfully built 99cc1ad10469 - -This example shows the use of the `.dockerignore` file to exclude the `.git` -directory from the context. Its effect can be seen in the changed size of the -uploaded context. - - $ sudo docker build -t vieux/apache:2.0 . - -This will build like the previous example, but it will then tag the -resulting image. The repository name will be `vieux/apache` -and the tag will be `2.0`. - - $ sudo docker build - < Dockerfile - -This will read a Dockerfile from `STDIN` without context. Due to the -lack of a context, no contents of any local directory will be sent to -the Docker daemon. Since there is no context, a Dockerfile `ADD` only -works if it refers to a remote URL. - - $ sudo docker build - < context.tar.gz - -This will build an image for a compressed context read from `STDIN`. -Supported formats are: bzip2, gzip and xz. - - $ sudo docker build github.com/creack/docker-firefox - -This will clone the GitHub repository and use the cloned repository as -context. The Dockerfile at the root of the -repository is used as the Dockerfile. Note that you -can specify an arbitrary Git repository by using the `git://` or `git@` -scheme. - - $ sudo docker build -f Dockerfile.debug . - -This will use a file called `Dockerfile.debug` for the build -instructions instead of `Dockerfile`. - - $ sudo docker build -f dockerfiles/Dockerfile.debug -t myapp_debug . - $ sudo docker build -f dockerfiles/Dockerfile.prod -t myapp_prod . 
- -The above commands will build the current build context (as specified by -the `.`) twice, once using a debug version of a `Dockerfile` and once using -a production version. - - $ cd /home/me/myapp/some/dir/really/deep - $ sudo docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp - $ sudo docker build -f ../../../../dockerfiles/debug /home/me/myapp - -These two `docker build` commands do the exact same thing. They both -use the contents of the `debug` file instead of looking for a `Dockerfile` -and will use `/home/me/myapp` as the root of the build context. Note that -`debug` is in the directory structure of the build context, regardless of how -you refer to it on the command line. - -> **Note:** `docker build` will return a `no such file or directory` error -> if the file or directory does not exist in the uploaded context. This may -> happen if there is no context, or if you specify a file that is elsewhere -> on the Host system. The context is limited to the current directory (and its -> children) for security reasons, and to ensure repeatable builds on remote -> Docker hosts. This is also the reason why `ADD ../file` will not work. - -## commit - - Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]] - - Create a new image from a container's changes - - -a, --author="" Author (e.g., "John Hannibal Smith ") - -m, --message="" Commit message - -p, --pause=true Pause container during commit - -It can be useful to commit a container's file changes or settings into a -new image. This allows you to debug a container by running an interactive -shell, or to export a working dataset to another server. Generally, it -is better to use Dockerfiles to manage your images in a documented and -maintainable way. - -By default, the container being committed and its processes will be paused -while the image is committed. This reduces the likelihood of -encountering data corruption during the process of creating the commit. 
-If this behavior is undesired, set the `--pause` option to `false`. - -#### Commit an existing container - - $ sudo docker ps - ID IMAGE COMMAND CREATED STATUS PORTS - c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours - 197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours - $ sudo docker commit c3f279d17e0a SvenDowideit/testimage:version3 - f5283438590d - $ sudo docker images | head - REPOSITORY TAG ID CREATED VIRTUAL SIZE - SvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB - -## cp - -Copy files/folders from a container's filesystem to the host -path. Paths are relative to the root of the filesystem. - - Usage: docker cp CONTAINER:PATH HOSTPATH - - Copy files/folders from the PATH to the HOSTPATH - -## create - -Creates a new container. - - Usage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...] - - Create a new container - - -a, --attach=[] Attach to STDIN, STDOUT or STDERR. - --add-host=[] Add a custom host-to-IP mapping (host:ip) - -c, --cpu-shares=0 CPU shares (relative weight) - --cap-add=[] Add Linux capabilities - --cap-drop=[] Drop Linux capabilities - --cidfile="" Write the container ID to the file - --cpuset="" CPUs in which to allow execution (0-3, 0,1) - --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm) - --dns=[] Set custom DNS servers - --dns-search=[] Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain) - -e, --env=[] Set environment variables - --entrypoint="" Overwrite the default ENTRYPOINT of the image - --env-file=[] Read in a line delimited file of environment variables - --expose=[] Expose a port or a range of ports (e.g. 
--expose=3300-3310) from the container without publishing it to your host - -h, --hostname="" Container host name - -i, --interactive=false Keep STDIN open even if not attached - --ipc="" Default is to create a private IPC namespace (POSIX SysV IPC) for the container - 'container:': reuses another container's shared memory, semaphores and message queues - 'host': use the host's shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure. - --link=[] Add link to another container in the form of :alias - --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" - -m, --memory="" Memory limit (format: , where unit = b, k, m or g) - --mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33) - --name="" Assign a name to the container - --net="bridge" Set the Network mode for the container - 'bridge': creates a new network stack for the container on the docker bridge - 'none': no networking for this container - 'container:': reuses another container's network stack - 'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure. - -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces - -p, --publish=[] Publish a container's port, or a range of ports (e.g., `-p 3300-3310`), to the host - format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort - Both hostPort and containerPort can be specified as a range of ports. - When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. 
(e.g., `-p 1234-1236:1234-1236/tcp`) - (use 'docker port' to see the actual mapping) - --privileged=false Give extended privileges to this container - --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) - --security-opt=[] Security Options - -t, --tty=false Allocate a pseudo-TTY - -u, --user="" Username or UID - -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container) - --volumes-from=[] Mount volumes from the specified container(s) - -w, --workdir="" Working directory inside the container - -The `docker create` command creates a writeable container layer over -the specified image and prepares it for running the specified command. -The container ID is then printed to `STDOUT`. -This is similar to `docker run -d` except the container is never started. -You can then use the `docker start ` command to start the -container at any point. - -This is useful when you want to set up a container configuration ahead -of time so that it is ready to start when you need it. - -Please see the [run command](#run) section for more details. - -#### Examples - - $ sudo docker create -t -i fedora bash - 6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752 - $ sudo docker start -a -i 6d8af538ec5 - bash-4.2# - -As of v1.4.0 container volumes are initialized during the `docker create` -phase (i.e., `docker run` too). For example, this allows you to `create` the -`data` volume container, and then use it from another container: - - $ docker create -v /data --name data ubuntu - 240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57 - $ docker run --rm --volumes-from data ubuntu ls -la /data - total 8 - drwxr-xr-x 2 root root 4096 Dec 5 04:10 . - drwxr-xr-x 48 root root 4096 Dec 5 04:11 .. 
Similarly, `create` a host directory bind-mounted volume container, which
can then be used from the subsequent container:

    $ docker create -v /home/docker:/docker --name docker ubuntu
    9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03
    $ docker run --rm --volumes-from docker ubuntu ls -la /docker
    total 20
    drwxr-sr-x  5 1000 staff  180 Dec  5 04:00 .
    drwxr-xr-x 48 root root  4096 Dec  5 04:13 ..
    -rw-rw-r--  1 1000 staff 3833 Dec  5 04:01 .ash_history
    -rw-r--r--  1 1000 staff  446 Nov 28 11:51 .ashrc
    -rw-r--r--  1 1000 staff   25 Dec  5 04:00 .gitconfig
    drwxr-sr-x  3 1000 staff   60 Dec  1 03:28 .local
    -rw-r--r--  1 1000 staff  920 Nov 28 11:51 .profile
    drwx--S---  2 1000 staff  460 Dec  5 00:51 .ssh
    drwxr-xr-x 32 1000 staff 1140 Dec  5 04:01 docker

## diff

List the changed files and directories in a container's filesystem

    Usage: docker diff CONTAINER

    Inspect changes on a container's filesystem

Three types of change are listed in the `diff`:

1. `A` - Add
2. `D` - Delete
3. `C` - Change

For example:

    $ sudo docker diff 7bb0e258aefe

    C /dev
    A /dev/kmsg
    C /etc
    A /etc/mtab
    A /go
    A /go/src
    A /go/src/github.com
    A /go/src/github.com/docker
    A /go/src/github.com/docker/docker
    A /go/src/github.com/docker/docker/.git
    ....

## events

    Usage: docker events [OPTIONS]

    Get real time events from the server

      -f, --filter=[]    Provide filter values (i.e., 'event=stop')
      --since=""         Show all events created since timestamp
      --until=""         Stream events until this timestamp

Docker containers will report the following events:

    create, destroy, die, export, kill, oom, pause, restart, start, stop, unpause

and Docker images will report:

    untag, delete

#### Filtering

The filtering flag (`-f` or `--filter`) format is a `key=value` pair.
If you would like to use
multiple filters, pass multiple flags (e.g., `--filter "foo=bar" --filter "bif=baz"`).

Using the same filter multiple times is handled as an *OR*; for example,
`--filter container=588a23dac085 --filter container=a8f7720b8c22` will display events for
container 588a23dac085 *OR* container a8f7720b8c22.

Using multiple filters is handled as an *AND*; for example,
`--filter container=588a23dac085 --filter event=start` will display events where the
container is 588a23dac085 *AND* the event type is *start*.

Current filters:

 * event
 * image
 * container

#### Examples

You'll need two shells for this example.

**Shell 1: Listening for events:**

    $ sudo docker events

**Shell 2: Start and Stop containers:**

    $ sudo docker start 4386fb97867d
    $ sudo docker stop 4386fb97867d
    $ sudo docker stop 7805c1d35632

**Shell 1: (Again .. now showing events):**

    2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start
    2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
    2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
    2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die
    2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop

**Show events in the past from a specified time:**

    $ sudo docker events --since 1378216169
    2014-03-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
    2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
    2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die
    2014-03-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop

    $ sudo docker events --since '2013-09-03'
    2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start
    2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
    2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --since '2013-09-03T15:49:29' - 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - -**Filter events:** - - $ sudo docker events --filter 'event=stop' - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'image=ubuntu-1:14.04' - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - - $ sudo docker events --filter 'container=7805c1d35632' - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d' - 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die - 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop - 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - - $ sudo docker events --filter 'container=7805c1d35632' --filter 'event=stop' - 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop - -## exec - - Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...] 
- - Run a command in a running container - - -d, --detach=false Detached mode: run command in the background - -i, --interactive=false Keep STDIN open even if not attached - -t, --tty=false Allocate a pseudo-TTY - -The `docker exec` command runs a new command in a running container. - -The command started using `docker exec` will only run while the container's primary -process (`PID 1`) is running, and will not be restarted if the container is restarted. - -If the container is paused, then the `docker exec` command will fail with an error: - - $ docker pause test - test - $ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 1ae3b36715d2 ubuntu:latest "bash" 17 seconds ago Up 16 seconds (Paused) test - $ docker exec test ls - FATA[0000] Error response from daemon: Container test is paused, unpause the container before exec - $ echo $? - 1 - -#### Examples - - $ sudo docker run --name ubuntu_bash --rm -i -t ubuntu bash - -This will create a container named `ubuntu_bash` and start a Bash session. - - $ sudo docker exec -d ubuntu_bash touch /tmp/execWorks - -This will create a new file `/tmp/execWorks` inside the running container -`ubuntu_bash`, in the background. - - $ sudo docker exec -it ubuntu_bash bash - -This will create a new Bash session in the container `ubuntu_bash`. - -## export - - Usage: docker export CONTAINER - - Export the contents of a filesystem as a tar archive to STDOUT - -For example: - - $ sudo docker export red_panda > latest.tar - -> **Note:** -> `docker export` does not export the contents of volumes associated with the -> container. If a volume is mounted on top of an existing directory in the -> container, `docker export` will export the contents of the *underlying* -> directory, not the contents of the volume. -> -> Refer to [Backup, restore, or migrate data volumes](/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes) -> in the user guide for examples on exporting data in a volume. 
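Since `docker export` writes an ordinary tar stream, standard tar tooling is all you need to inspect the result. A minimal local sketch (no daemon required), using a scratch directory in place of the `red_panda` container's filesystem:

```shell
#!/bin/sh
# Build a tiny stand-in for a container root filesystem.
root=$(mktemp -d)
mkdir -p "$root/etc"
echo red_panda > "$root/etc/hostname"

# `docker export red_panda > latest.tar` would produce an archive
# like this one, rooted at the container's filesystem.
tar -C "$root" -cf "$root.tar" .

# Audit the exported contents without unpacking.
tar -tf "$root.tar" | grep 'etc/hostname'
```

The same `tar -tf` listing works on a real `latest.tar` produced by `docker export`, which is a convenient way to confirm that volume contents were (by design) left out of the archive.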
- -## history - - Usage: docker history [OPTIONS] IMAGE - - Show the history of an image - - --no-trunc=false Don't truncate output - -q, --quiet=false Only show numeric IDs - -To see how the `docker:latest` image was built: - - $ sudo docker history docker - IMAGE CREATED CREATED BY SIZE - 3e23a5875458790b7a806f95f7ec0d0b2a5c1659bfc899c89f939f6d5b8f7094 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B - 8578938dd17054dce7993d21de79e96a037400e8d28e15e7290fea4f65128a36 8 days ago /bin/sh -c dpkg-reconfigure locales && locale-gen C.UTF-8 && /usr/sbin/update-locale LANG=C.UTF-8 1.245 MB - be51b77efb42f67a5e96437b3e102f81e0a1399038f77bf28cea0ed23a65cf60 8 days ago /bin/sh -c apt-get update && apt-get install -y git libxml2-dev python build-essential make gcc python-dev locales python-pip 338.3 MB - 4b137612be55ca69776c7f30c2d2dd0aa2e7d72059820abf3e25b629f887a084 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB - 750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian 0 B - 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 9 months ago 0 B - -## images - - Usage: docker images [OPTIONS] [REPOSITORY] - - List images - - -a, --all=false Show all images (by default filter out the intermediate image layers) - -f, --filter=[] Provide filter values (i.e., 'dangling=true') - --no-trunc=false Don't truncate output - -q, --quiet=false Only show numeric IDs - -The default `docker images` will show all top level -images, their repository and tags, and their virtual size. - -Docker images have intermediate layers that increase reusability, -decrease disk usage, and speed up `docker build` by -allowing each step to be cached. These intermediate layers are not shown -by default. - -The `VIRTUAL SIZE` is the cumulative space taken up by the image and all -its parent images. 
This is also the disk space used by the contents of the -Tar file created when you `docker save` an image. - -An image will be listed more than once if it has multiple repository names -or tags. This single image (identifiable by its matching `IMAGE ID`) -uses up the `VIRTUAL SIZE` listed only once. - -#### Listing the most recently created images - - $ sudo docker images | head - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - 77af4d6b9913 19 hours ago 1.089 GB - committ latest b6fa739cedf5 19 hours ago 1.089 GB - 78a85c484f71 19 hours ago 1.089 GB - docker latest 30557a29d5ab 20 hours ago 1.089 GB - 5ed6274db6ce 24 hours ago 1.089 GB - postgres 9 746b819f315e 4 days ago 213.4 MB - postgres 9.3 746b819f315e 4 days ago 213.4 MB - postgres 9.3.5 746b819f315e 4 days ago 213.4 MB - postgres latest 746b819f315e 4 days ago 213.4 MB - - -#### Listing the full length image IDs - - $ sudo docker images --no-trunc | head - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - 77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182 19 hours ago 1.089 GB - committest latest b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f 19 hours ago 1.089 GB - 78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921 19 hours ago 1.089 GB - docker latest 30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4 20 hours ago 1.089 GB - 0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5 20 hours ago 1.089 GB - 18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b 22 hours ago 1.082 GB - f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a 23 hours ago 1.089 GB - tryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 131.5 MB - 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB - -#### Filtering - -The filtering flag (`-f` or `--filter`) format is of "key=value". 
If there is more
than one filter, then pass multiple flags (e.g., `--filter "foo=bar" --filter "bif=baz"`).

Current filters:

 * dangling (boolean - true or false)

##### Untagged images

    $ sudo docker images --filter "dangling=true"

    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    <none>              <none>              8abc22fbb042        4 weeks ago         0 B
    <none>              <none>              48e5f45168b9        4 weeks ago         2.489 MB
    <none>              <none>              bf747efa0e2f        4 weeks ago         0 B
    <none>              <none>              980fe10e5736        12 weeks ago        101.4 MB
    <none>              <none>              dea752e4e117        12 weeks ago        101.4 MB
    <none>              <none>              511136ea3c5a        8 months ago        0 B

This will display untagged images that are the leaves of the image tree (not
intermediate layers). These images occur when a new build of an image takes the
`repo:tag` away from the image ID, leaving it untagged. Docker will warn you if
you try to remove an image while a container is presently using it; this flag
makes batch cleanup possible.

Ready for use by `docker rmi ...`, like:

    $ sudo docker rmi $(sudo docker images -f "dangling=true" -q)

    8abc22fbb042
    48e5f45168b9
    bf747efa0e2f
    980fe10e5736
    dea752e4e117
    511136ea3c5a

NOTE: Docker will warn you if any containers exist that are using these untagged images.

## import

    Usage: docker import URL|- [REPOSITORY[:TAG]]

    Create an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.

URLs must start with `http` and point to a single file archive (.tar,
.tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a root filesystem. If
you would like to import from a local directory or archive, you can use
the `-` parameter to take the data from `STDIN`.

#### Examples

**Import from a remote location:**

This will create a new untagged image.

    $ sudo docker import http://example.com/exampleimage.tgz

**Import from a local file:**

Import to docker via pipe and `STDIN`.
    $ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new

**Import from a local directory:**

    $ sudo tar -c . | sudo docker import - exampleimagedir

Note the `sudo` in this example – you must preserve
the ownership of the files (especially root ownership) while
archiving with tar. If you do not run tar as root (or via `sudo`),
file ownership might not be preserved.

## info

    Usage: docker info

    Display system-wide information

For example:

    $ sudo docker -D info
    Containers: 14
    Images: 52
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 545
    Execution Driver: native-0.2
    Kernel Version: 3.13.0-24-generic
    Operating System: Ubuntu 14.04 LTS
    CPUs: 1
    Name: prod-server-42
    ID: 7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS
    Total Memory: 2 GiB
    Debug mode (server): false
    Debug mode (client): true
    Fds: 10
    Goroutines: 9
    EventsListeners: 0
    Init Path: /usr/bin/docker
    Docker Root Dir: /var/lib/docker
    Username: svendowideit
    Registry: [https://index.docker.io/v1/]
    Labels:
     storage=ssd

The global `-D` option tells all `docker` commands to output debug information.

When sending issue reports, please use `docker version` and `docker -D info` to
ensure we know how your setup is configured.

## inspect

    Usage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]

    Return low-level information on a container or image

      -f, --format=""    Format the output using the given go template.

By default, this will render all results in a JSON array. If a format is
specified, the given template will be executed for each result.

Go's [text/template](http://golang.org/pkg/text/template/) package
describes all the details of the format.

#### Examples

**Get an instance's IP address:**

For the most part, you can pick out any field from the JSON in a fairly
straightforward manner.
    $ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID

**Get an instance's MAC Address:**

    $ sudo docker inspect --format='{{.NetworkSettings.MacAddress}}' $INSTANCE_ID

**List All Port Bindings:**

One can loop over arrays and maps in the results to produce simple text
output:

    $ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID

**Find a Specific Port Mapping:**

The `.Field` syntax doesn't work when the field name begins with a
number, but the template language's `index` function does. The
`.NetworkSettings.Ports` section contains a map of the internal port
mappings to a list of external address/port objects, so to grab just the
numeric public port, you use `index` to find the specific port map, and
then `index` 0 contains the first object inside of that. Then we ask for
the `HostPort` field to get the public address.

    $ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID

**Get config:**

The `.Field` syntax doesn't work when the field contains JSON data, but
the template language's custom `json` function does. The `.config`
section contains a complex JSON object, so to grab it as JSON, you use
`json` to convert the configuration object into JSON.

    $ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID

## kill

    Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]

    Kill a running container using SIGKILL or a specified signal

      -s, --signal="KILL"    Signal to send to the container

The main process inside the container will be sent `SIGKILL`, or any
signal specified with option `--signal`.
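One visible consequence of the default `SIGKILL` is the container's exit status: a process ended by an unhandled signal exits with status 128 plus the signal number, so a killed container records `137` (128 + 9) in `docker ps -a`. The same convention can be checked locally, with a plain `sleep` standing in for a container's main process:

```shell
#!/bin/sh
# Start a stand-in "main process" and end it the way
# `docker kill` would by default (SIGKILL).
sleep 60 &
pid=$!
kill -KILL "$pid"
wait "$pid"
status=$?
echo "exit status: $status"    # 128 + 9 (SIGKILL) = 137
```

Signals sent with `--signal`, such as `TERM`, can be caught and handled by the process; `SIGKILL` cannot, which is why it is the forceful default.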
- -## load - - Usage: docker load [OPTIONS] - - Load an image from a tar archive on STDIN - - -i, --input="" Read from a tar archive file, instead of STDIN - -Loads a tarred repository from a file or the standard input stream. -Restores both images and tags. - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - $ sudo docker load < busybox.tar - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - busybox latest 769b9341d937 7 weeks ago 2.489 MB - $ sudo docker load --input fedora.tar - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - busybox latest 769b9341d937 7 weeks ago 2.489 MB - fedora rawhide 0d20aec6529d 7 weeks ago 387 MB - fedora 20 58394af37342 7 weeks ago 385.5 MB - fedora heisenbug 58394af37342 7 weeks ago 385.5 MB - fedora latest 58394af37342 7 weeks ago 385.5 MB - -## login - - Usage: docker login [OPTIONS] [SERVER] - - Register or log in to a Docker registry server, if no server is specified "https://index.docker.io/v1/" is the default. - - -e, --email="" Email - -p, --password="" Password - -u, --username="" Username - -If you want to login to a self-hosted registry you can specify this by -adding the server name. - - example: - $ sudo docker login localhost:8080 - -## logout - - Usage: docker logout [SERVER] - - Log out from a Docker registry, if no server is specified "https://index.docker.io/v1/" is the default. - -For example: - - $ sudo docker logout localhost:8080 - -## logs - - Usage: docker logs [OPTIONS] CONTAINER - - Fetch the logs of a container - - -f, --follow=false Follow log output - -t, --timestamps=false Show timestamps - --tail="all" Output the specified number of lines at the end of logs (defaults to all logs) - -The `docker logs` command batch-retrieves logs present at the time of execution. - -The `docker logs --follow` command will continue streaming the new output from -the container's `STDOUT` and `STDERR`. 
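The `--tail` option limits the batch output to the last N lines, equivalent in effect to piping the full log through `tail`. A local sketch on a sample log file (the log content here is hypothetical, standing in for a container's `STDOUT`):

```shell
#!/bin/sh
# Sample output standing in for what `docker logs <container>` returns.
log=$(mktemp)
printf 'starting\nlistening on :6379\nready to accept connections\n' > "$log"

# `docker logs --tail="2" <container>` is equivalent in effect to:
tail -n 2 "$log"
```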
Passing a negative number or a non-integer to `--tail` is invalid and the
value is set to `all` in that case. This behavior may change in the future.

The `docker logs --timestamps` command will add an RFC3339Nano
timestamp, for example `2014-09-16T06:17:46.000000000Z`, to each
log entry. To ensure that the timestamps are aligned, the
nanosecond part of the timestamp will be padded with zero when necessary.

## pause

    Usage: docker pause CONTAINER

    Pause all processes within a container

The `docker pause` command uses the cgroups freezer to suspend all processes in
a container. Traditionally, when suspending a process the `SIGSTOP` signal is
used, which is observable by the process being suspended. With the cgroups freezer
the process is unaware, and unable to capture, that it is being suspended,
and subsequently resumed.

See the
[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt)
for further details.

## port

    Usage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]

    List port mappings for the CONTAINER, or lookup the public-facing port that is NAT-ed to the PRIVATE_PORT

You can find out all the ports mapped by not specifying a `PRIVATE_PORT`, or
just a specific mapping:

    $ sudo docker ps test
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
    b650456536c7        busybox:latest      top                 54 minutes ago      Up 54 minutes       0.0.0.0:1234->9876/tcp, 0.0.0.0:4321->7890/tcp   test
    $ sudo docker port test
    7890/tcp -> 0.0.0.0:4321
    9876/tcp -> 0.0.0.0:1234
    $ sudo docker port test 7890/tcp
    0.0.0.0:4321
    $ sudo docker port test 7890/udp
    2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
    $ sudo docker port test 7890
    0.0.0.0:4321

## rename

    Usage: docker rename OLD_NAME NEW_NAME

    Rename an existing container to NEW_NAME

The `docker rename` command renames an existing container.

## ps

    Usage: docker ps [OPTIONS]

    List containers

      -a, --all=false       Show all containers. Only running containers are shown by default.
      --before=""           Show only container created before Id or Name, include non-running ones.
      -f, --filter=[]       Provide filter values. Valid filters:
                              exited=<int> - containers with exit code of <int>
                              status=(restarting|running|paused|exited)
      -l, --latest=false    Show only the latest created container, include non-running ones.
      -n=-1                 Show n last created containers, include non-running ones.
      --no-trunc=false      Don't truncate output
      -q, --quiet=false     Only display numeric IDs
      -s, --size=false      Display total file sizes
      --since=""            Show only containers created since Id or Name, include non-running ones.

Running `docker ps --no-trunc` showing 2 linked containers:

    $ sudo docker ps
    CONTAINER ID                                                       IMAGE                        COMMAND                CREATED              STATUS              PORTS               NAMES
    f7ee772232194fcc088c6bdec6ea09f7b3f6c54d53934658164b8602d7cd4744   ubuntu:12.04                 bash                   17 seconds ago       Up 16 seconds                           webapp
    d0963715a061c7c7b7cc80b2646da913a959fbf13e80a971d4a60f6997a2f595   crosbymichael/redis:latest   /redis-server --dir    33 minutes ago       Up 33 minutes       6379/tcp            redis,webapp/db

`docker ps` will show only running containers by default. To see all containers:
`docker ps -a`

#### Filtering

The filtering flag (`-f` or `--filter`) format is a `key=value` pair.
If there is more -than one filter, then pass multiple flags (e.g. `--filter "foo=bar" --filter "bif=baz"`) - -Current filters: - * exited (int - the code of exited containers. Only useful with '--all') - * status (restarting|running|paused|exited) - -##### Successfully exited containers - - $ sudo docker ps -a --filter 'exited=0' - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - ea09c3c82f6e registry:latest /srv/run.sh 2 weeks ago Exited (0) 2 weeks ago 127.0.0.1:5000->5000/tcp desperate_leakey - 106ea823fe4e fedora:latest /bin/sh -c 'bash -l' 2 weeks ago Exited (0) 2 weeks ago determined_albattani - 48ee228c9464 fedora:20 bash 2 weeks ago Exited (0) 2 weeks ago tender_torvalds - -This shows all the containers that have exited with status of '0' - -## pull - - Usage: docker pull [OPTIONS] NAME[:TAG] - - Pull an image or a repository from the registry - - -a, --all-tags=false Download all tagged images in the repository - -Most of your images will be created on top of a base image from the -[Docker Hub](https://hub.docker.com) registry. - -[Docker Hub](https://hub.docker.com) contains many pre-built images that you -can `pull` and try without needing to define and configure your own. - -It is also possible to manually specify the path of a registry to pull from. -For example, if you have set up a local registry, you can specify its path to -pull from it. A repository path is similar to a URL, but does not contain -a protocol specifier (`https://`, for example). - -To download a particular image, or set of images (i.e., a repository), -use `docker pull`: - - $ sudo docker pull debian - # will pull the debian:latest image, its intermediate layers - # and any aliases of the same id - $ sudo docker pull debian:testing - # will pull the image named debian:testing and any intermediate - # layers it is based on. - # (Typically the empty `scratch` image, a MAINTAINER layer, - # and the un-tarred base). 
    $ sudo docker pull --all-tags centos
    # will pull all the images from the centos repository

    $ sudo docker pull registry.hub.docker.com/debian
    # manually specifies the path to the default Docker registry. This could
    # be replaced with the path to a local registry to pull from another source.

## push

    Usage: docker push NAME[:TAG]

    Push an image or a repository to the registry

Use `docker push` to share your images to the [Docker Hub](https://hub.docker.com)
registry or to a self-hosted one.

## restart

    Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]

    Restart a running container

      -t, --time=10    Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.

## rm

    Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]

    Remove one or more containers

      -f, --force=false      Force the removal of a running container (uses SIGKILL)
      -l, --link=false       Remove the specified link and not the underlying container
      -v, --volumes=false    Remove the volumes associated with the container

#### Examples

    $ sudo docker rm /redis
    /redis

This will remove the container referenced under the link
`/redis`.

    $ sudo docker rm --link /webapp/redis
    /webapp/redis

This will remove the underlying link between `/webapp` and the `/redis`
containers, removing all network communication.

    $ sudo docker rm --force redis
    redis

The main process inside the container referenced under the link `/redis` will receive
`SIGKILL`, then the container will be removed.

    $ sudo docker rm $(sudo docker ps -a -q)

This command will delete all stopped containers. The command `docker ps
-a -q` will return all existing container IDs and pass them to the `rm`
command which will delete them. Any running containers will not be
deleted.

## rmi

    Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
- - Remove one or more images - - -f, --force=false Force removal of the image - --no-prune=false Do not delete untagged parents - -#### Removing tagged images - -Images can be removed either by their short or long IDs, or their image -names. If an image has more than one name, each of them needs to be -removed before the image is removed. - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - test1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - test2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - - $ sudo docker rmi fd484f19954f - Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories - 2013/12/11 05:47:16 Error: failed to remove one or more images - - $ sudo docker rmi test1 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - $ sudo docker rmi test2 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) - $ sudo docker rmi test - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 - -## run - - Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] - - Run a command in a new container - - -a, --attach=[] Attach to STDIN, STDOUT or STDERR. - --add-host=[] Add a custom host-to-IP mapping (host:ip) - -c, --cpu-shares=0 CPU shares (relative weight) - --cap-add=[] Add Linux capabilities - --cap-drop=[] Drop Linux capabilities - --cidfile="" Write the container ID to the file - --cpuset="" CPUs in which to allow execution (0-3, 0,1) - -d, --detach=false Detached mode: run the container in the background and print the new container ID - --device=[] Add a host device to the container (e.g. 
    --device=/dev/sdc:/dev/xvdc:rwm)
      --dns=[]                   Set custom DNS servers
      --dns-search=[]            Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)
      -e, --env=[]               Set environment variables
      --entrypoint=""            Overwrite the default ENTRYPOINT of the image
      --env-file=[]              Read in a line delimited file of environment variables
      --expose=[]                Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host
      -h, --hostname=""          Container host name
      -i, --interactive=false    Keep STDIN open even if not attached
      --ipc=""                   Default is to create a private IPC namespace (POSIX SysV IPC) for the container
                                   'container:<name|id>': reuses another container's shared memory, semaphores and message queues
                                   'host': use the host's shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.
      --link=[]                  Add link to another container in the form of name:alias
      --lxc-conf=[]              (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
      -m, --memory=""            Memory limit (format: <number><unit>, where unit = b, k, m or g)
      --memory-swap=""           Total memory usage (memory + swap), set '-1' to disable swap (format: <number><unit>, where unit = b, k, m or g)
      --mac-address=""           Container MAC address (e.g. 92:d0:c6:0a:29:33)
      --name=""                  Assign a name to the container
      --net="bridge"             Set the Network mode for the container
                                   'bridge': creates a new network stack for the container on the docker bridge
                                   'none': no networking for this container
                                   'container:<name|id>': reuses another container's network stack
                                   'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
- -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces - -p, --publish=[] Publish a container's port to the host - format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort - Both hostPort and containerPort can be specified as a range of ports. - When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`) - (use 'docker port' to see the actual mapping) - --pid=host 'host': use the host PID namespace inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure. - --privileged=false Give extended privileges to this container - --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) - --rm=false Automatically remove the container when it exits (incompatible with -d) - --security-opt=[] Security Options - --sig-proxy=true Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied. - -t, --tty=false Allocate a pseudo-TTY - -u, --user="" Username or UID - -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container) - --volumes-from=[] Mount volumes from the specified container(s) - -w, --workdir="" Working directory inside the container - -The `docker run` command first `creates` a writeable container layer over the -specified image, and then `starts` it using the specified command. That is, -`docker run` is equivalent to the API `/containers/create` then -`/containers/(id)/start`. A stopped container can be restarted with all its -previous changes intact using `docker start`. See `docker ps -a` to view a list -of all containers. 
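The four `-p` spellings differ only in which fields are present. The small shell function below (an illustration, not Docker's actual parser) shows how each form decomposes into its IP, host-port, and container-port fields:

```shell
#!/bin/sh
# Decompose a -p/--publish spec. Illustration only, not Docker's parser.
# Forms: ip:hostPort:containerPort | ip::containerPort |
#        hostPort:containerPort    | containerPort
parse_publish() {
    spec=$1
    case $spec in
        *:*:*) ip=${spec%%:*}; rest=${spec#*:}       # two colons present
               host=${rest%%:*}; cont=${rest#*:} ;;
        *:*)   ip=""; host=${spec%%:*}; cont=${spec#*:} ;;  # one colon
        *)     ip=""; host=""; cont=$spec ;;                # bare port
    esac
    echo "ip=${ip:-any} hostPort=${host:-random} containerPort=$cont"
}

parse_publish 127.0.0.1:8080:80   # ip:hostPort:containerPort
parse_publish 127.0.0.1::80       # ip::containerPort (random host port)
parse_publish 8080:80             # hostPort:containerPort
parse_publish 80                  # containerPort only
```

An absent IP means "bind on all host interfaces" and an absent host port means "pick a random one", matching the behavior described for `-p` and `-P` above.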
- -There is detailed information about `docker run` in the [Docker run reference]( -/reference/run/). - -The `docker run` command can be used in combination with `docker commit` to -[*change the command that a container runs*](#commit-an-existing-container). - -See the [Docker User Guide](/userguide/dockerlinks/) for more detailed -information about the `--expose`, `-p`, `-P` and `--link` parameters, -and linking containers. - -#### Examples - - $ sudo docker run --name test -it debian - $$ exit 13 - exit - $ echo $? - 13 - $ sudo docker ps -a | grep test - 275c44472aeb debian:7 "/bin/bash" 26 seconds ago Exited (13) 17 seconds ago test - -In this example, we are running `bash` interactively in the `debian:latest` image, and giving -the container the name `test`. We then quit `bash` by running `exit 13`, which means `bash` -will have an exit code of `13`. This is then passed on to the caller of `docker run`, and -is recorded in the `test` container metadata. - - $ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo "test" - -This will create a container and print `test` to the console. The `cidfile` -flag makes Docker attempt to create a new file and write the container ID to it. -If the file exists already, Docker will return an error. Docker will close this -file when `docker run` exits. - - $ sudo docker run -t -i --rm ubuntu bash - root@bc338942ef20:/# mount -t tmpfs none /mnt - mount: permission denied - -This will *not* work, because by default, most potentially dangerous kernel -capabilities are dropped; including `cap_sys_admin` (which is required to mount -filesystems). 
However, the `--privileged` flag will allow it to run:

    $ sudo docker run --privileged ubuntu bash
    root@50e3f57e16e6:/# mount -t tmpfs none /mnt
    root@50e3f57e16e6:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    none            1.9G     0  1.9G   0% /mnt

The `--privileged` flag gives *all* capabilities to the container, and it also
lifts all the limitations enforced by the `device` cgroup controller. In other
words, the container can then do almost everything that the host can do. This
flag exists to allow special use-cases, like running Docker within Docker.

    $ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd

The `-w` flag runs the command inside the given directory, here
`/path/to/dir/`. If the path does not exist, it is created inside the
container.

    $ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd

The `-v` flag mounts the current working directory into the container. The
`-w` flag then runs the command inside that directory by changing into the
path returned by `pwd`. This combination executes the command in the
container, but inside the current working directory.

    $ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash

When the host directory of a bind-mounted volume doesn't exist, Docker
will automatically create this directory on the host for you. In the
example above, Docker will create the `/doesnt/exist`
folder before starting your container.

    $ sudo docker run --read-only -v /icanwrite busybox touch /icanwrite/here

Volumes can be used in combination with `--read-only` to control where
a container writes files. The `--read-only` flag mounts the container's root
filesystem as read only, prohibiting writes to locations other than the
specified volumes for the container.
    $ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh

By bind-mounting the docker unix socket and statically linked docker
binary (such as that provided by [https://get.docker.com](https://get.docker.com)),
you give the container full access to create and manipulate the host's
Docker daemon.

    $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash

This binds port `8080` of the container to port `80` on `127.0.0.1` of
the host machine. The [Docker User Guide](/userguide/dockerlinks/)
explains in detail how to manipulate ports in Docker.

    $ sudo docker run --expose 80 ubuntu bash

This exposes port `80` of the container for use within a link without
publishing the port to the host system's interfaces. The [Docker User
Guide](/userguide/dockerlinks) explains in detail how to manipulate
ports in Docker.

    $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

This sets environment variables in the container. All three flags are shown
here for illustration. The `-e` and `--env` flags take an environment variable
and value, or, if no `=` is provided, that variable's current value is passed
through (i.e. `$MYVAR1` from the host is set to `$MYVAR1` in the container).
When no `=` is provided and that variable is not defined in the client's
environment, the variable is removed from the container's list of
environment variables. All three flags, `-e`, `--env` and `--env-file`, can
be repeated.

Regardless of the order of these three flags, the `--env-file` flags are
processed first, and then the `-e` and `--env` flags. This way, `-e` or
`--env` can override variables as needed.
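The `--env-file` handling just described can be sketched in shell. The helper below is a hypothetical stand-in for Docker's own parsing, not its real code: it drops `#` comments and blank lines, keeps `VAR=VAL` lines unchanged, and resolves a bare `VAR` from the caller's environment, omitting it entirely when unset:

```shell
# Hypothetical sketch of --env-file processing (not Docker's actual code):
# '#' comments and blank lines are dropped, VAR=VAL lines pass through
# unchanged, and a bare VAR is filled in from the caller's environment
# (or omitted entirely when the caller has no such variable).
cat > env.list <<'EOF'
TEST_FOO=BAR
# this is a comment

TEST_PASSTHROUGH
EOF

expand_env_file() {
  while IFS= read -r line; do
    case $line in
      ''|'#'*) ;;                        # skip blanks and comments
      *=*) printf '%s\n' "$line" ;;      # explicit VAR=VAL
      *)   if printenv "$line" >/dev/null 2>&1; then
             printf '%s=%s\n' "$line" "$(printenv "$line")"
           fi ;;
    esac
  done < "$1"
}

export TEST_PASSTHROUGH=howdy
expand_env_file env.list
# TEST_FOO=BAR
# TEST_PASSTHROUGH=howdy
```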
- - $ cat ./env.list - TEST_FOO=BAR - $ sudo docker run --env TEST_FOO="This is a test" --env-file ./env.list busybox env | grep TEST_FOO - TEST_FOO=This is a test - -The `--env-file` flag takes a filename as an argument and expects each line -to be in the `VAR=VAL` format, mimicking the argument passed to `--env`. Comment -lines need only be prefixed with `#` - -An example of a file passed with `--env-file` - - $ cat ./env.list - TEST_FOO=BAR - - # this is a comment - TEST_APP_DEST_HOST=10.10.0.127 - TEST_APP_DEST_PORT=8888 - - # pass through this variable from the caller - TEST_PASSTHROUGH - $ sudo TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env - HOME=/ - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - HOSTNAME=5198e0745561 - TEST_FOO=BAR - TEST_APP_DEST_HOST=10.10.0.127 - TEST_APP_DEST_PORT=8888 - TEST_PASSTHROUGH=howdy - - $ sudo docker run --name console -t -i ubuntu bash - -This will create and run a new container with the container name being -`console`. - - $ sudo docker run --link /redis:redis --name console ubuntu bash - -The `--link` flag will link the container named `/redis` into the newly -created container with the alias `redis`. The new container can access the -network and environment of the `redis` container via environment variables. -The `--name` flag will assign the name `console` to the newly created -container. - - $ sudo docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd - -The `--volumes-from` flag mounts all the defined volumes from the referenced -containers. Containers can be specified by repetitions of the `--volumes-from` -argument. The container ID may be optionally suffixed with `:ro` or `:rw` to -mount the volumes in read-only or read-write mode, respectively. By default, -the volumes are mounted in the same mode (read write or read only) as -the reference container. 
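The `--volumes-from` argument shape described above (a container reference with an optional `:ro` or `:rw` suffix, defaulting to the reference container's own mode) can be sketched with a small hypothetical parser, which is not part of Docker itself:

```shell
# Hypothetical parser (not part of Docker) for the --volumes-from
# argument: a container reference with an optional :ro or :rw suffix;
# with no suffix the mode is inherited from the reference container.
parse_volumes_from() {
  case $1 in
    *:ro) echo "container=${1%:ro} mode=ro" ;;
    *:rw) echo "container=${1%:rw} mode=rw" ;;
    *)    echo "container=$1 mode=inherit" ;;
  esac
}

parse_volumes_from ba8c0c54f0f2:ro   # container=ba8c0c54f0f2 mode=ro
parse_volumes_from 777f7dc92da7      # container=777f7dc92da7 mode=inherit
```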
- -The `-a` flag tells `docker run` to bind to the container's `STDIN`, `STDOUT` or -`STDERR`. This makes it possible to manipulate the output and input as needed. - - $ echo "test" | sudo docker run -i -a stdin ubuntu cat - - -This pipes data into a container and prints the container's ID by attaching -only to the container's `STDIN`. - - $ sudo docker run -a stderr ubuntu echo test - -This isn't going to print anything unless there's an error because we've -only attached to the `STDERR` of the container. The container's logs -still store what's been written to `STDERR` and `STDOUT`. - - $ cat somefile | sudo docker run -i -a stdin mybuilder dobuild - -This is how piping a file into a container could be done for a build. -The container's ID will be printed after the build is done and the build -logs could be retrieved using `docker logs`. This is -useful if you need to pipe a file or something else into a container and -retrieve the container's ID once the container has finished running. - - $ sudo docker run --device=/dev/sdc:/dev/xvdc --device=/dev/sdd --device=/dev/zero:/dev/nulo -i -t ubuntu ls -l /dev/{xvdc,sdd,nulo} - brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc - brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd - crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo - -It is often necessary to directly expose devices to a container. The `--device` -option enables that. For example, a specific block storage device or loop -device or audio device can be added to an otherwise unprivileged container -(without the `--privileged` flag) and have the application directly access it. - -By default, the container will be able to `read`, `write` and `mknod` these devices. 
This can be overridden using a third `:rwm` set of options to each `--device`
flag:

```
    $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc

    Command (m for help): q
    $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc
    You will not be able to write the partition table.

    Command (m for help): q

    $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc
    fdisk: unable to open /dev/xvdc: Operation not permitted
```

> **Note:**
> `--device` cannot be safely used with ephemeral devices. Block devices that
> may be removed should not be added to untrusted containers with `--device`.

**A complete example:**

    $ sudo docker run -d --name static static-web-files sh
    $ sudo docker run -d --expose=8098 --name riak riakserver
    $ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver
    $ sudo docker run -d -p 1443:443 --dns=10.0.0.1 --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver
    $ sudo docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log

This example shows five containers that might be set up to test a web
application change:

1. Start a pre-prepared volume image `static-web-files` (in the background)
   that has CSS, image and static HTML in it (with a `VOLUME` instruction in
   the Dockerfile to allow the web server to use those files);
2. Start a pre-prepared `riakserver` image, give the container name `riak` and
   expose port `8098` to any containers that link to it;
3.
Start the `appserver` image, restricting its memory usage to 100MB, setting
   two environment variables `DEVELOPMENT` and `BRANCH`, and bind-mounting the
   current directory (`$(pwd)`) in the container in read-only mode as `/app/bin`;
4. Start the `webserver`, mapping port `443` in the container to port `1443` on
   the Docker server, setting the DNS server to `10.0.0.1` and DNS search
   domain to `dev.org`, creating a volume to put the log files into (so we can
   access it from another container), then importing the files from the volume
   exposed by the `static` container, and linking to all exposed ports from
   `riak` and `app`. Lastly, we set the hostname to `www.sven.dev.org` so it's
   consistent with the pre-generated SSL certificate;
5. Finally, we create a container that runs `tail -f access.log` using the logs
   volume from the `web` container, setting the workdir to `/var/log/httpd`. The
   `--rm` option means that when the container exits, the container's layer is
   removed.

#### Restart Policies

Use Docker's `--restart` to specify a container's *restart policy*. A restart
policy controls whether the Docker daemon restarts a container after exit.
Docker supports the following restart policies:
| Policy | Result |
|--------|--------|
| `no` | Do not automatically restart the container when it exits. This is the default. |
| `on-failure[:max-retries]` | Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts. |
| `always` | Always restart the container regardless of the exit status. When you specify `always`, the Docker daemon will try to restart the container indefinitely. |
- - $ sudo docker run --restart=always redis - -This will run the `redis` container with a restart policy of **always** -so that if the container exits, Docker will restart it. - -More detailed information on restart policies can be found in the -[Restart Policies (--restart)](/reference/run/#restart-policies-restart) section -of the Docker run reference page. - -### Adding entries to a container hosts file - -You can add other hosts into a container's `/etc/hosts` file by using one or more -`--add-host` flags. This example adds a static address for a host named `docker`: - -``` - $ docker run --add-host=docker:10.180.0.1 --rm -it debian - $$ ping docker - PING docker (10.180.0.1): 48 data bytes - 56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms - 56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms - ^C--- docker ping statistics --- - 2 packets transmitted, 2 packets received, 0% packet loss - round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms -``` - -> **Note:** -> Sometimes you need to connect to the Docker host, which means getting the IP -> address of the host. You can use the following shell commands to simplify this -> process: -> -> $ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'" -> $ docker run --add-host=docker:$(hostip) --rm -it debian - -## save - - Usage: docker save [OPTIONS] IMAGE [IMAGE...] - - Save an image(s) to a tar archive (streamed to STDOUT by default) - - -o, --output="" Write to a file, instead of STDOUT - -Produces a tarred repository to the standard output stream. -Contains all parent layers, and all tags + versions, or specified `repo:tag`, for -each argument provided. 
- -It is used to create a backup that can then be used with `docker load` - - $ sudo docker save busybox > busybox.tar - $ ls -sh busybox.tar - 2.7M busybox.tar - $ sudo docker save --output busybox.tar busybox - $ ls -sh busybox.tar - 2.7M busybox.tar - $ sudo docker save -o fedora-all.tar fedora - $ sudo docker save -o fedora-latest.tar fedora:latest - -It is even useful to cherry-pick particular tags of an image repository - - $ sudo docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy - -## search - -Search [Docker Hub](https://hub.docker.com) for images - - Usage: docker search [OPTIONS] TERM - - Search the Docker Hub for images - - --automated=false Only show automated builds - --no-trunc=false Don't truncate output - -s, --stars=0 Only displays with at least x stars - -See [*Find Public Images on Docker Hub*]( -/userguide/dockerrepos/#searching-for-images) for -more details on finding shared images from the command line. - -> **Note:** -> Search queries will only return up to 25 results - -## start - - Usage: docker start [OPTIONS] CONTAINER [CONTAINER...] - - Restart a stopped container - - -a, --attach=false Attach container's STDOUT and STDERR and forward all signals to the process - -i, --interactive=false Attach container's STDIN - -## stats - - Usage: docker stats CONTAINER [CONTAINER...] - - Display a live stream of one or more containers' resource usage statistics - - --help=false Print usage - -> **Note**: this functionality currently only works when using the *libcontainer* exec-driver. - -Running `docker stats` on multiple containers - - $ sudo docker stats redis1 redis2 - CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O - redis1 0.07% 796 KiB/64 MiB 1.21% 788 B/648 B - redis2 0.07% 2.746 MiB/64 MiB 4.29% 1.266 KiB/648 B - - -The `docker stats` command will only return a live stream of data for running -containers. Stopped containers will not return any data. 
- -> **Note:** -> If you want more detailed information about a container's resource usage, use the API endpoint. - -## stop - - Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] - - Stop a running container by sending SIGTERM and then SIGKILL after a grace period - - -t, --time=10 Number of seconds to wait for the container to stop before killing it. Default is 10 seconds. - -The main process inside the container will receive `SIGTERM`, and after a -grace period, `SIGKILL`. - -## tag - - Usage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG] - - Tag an image into a repository - - -f, --force=false Force - -You can group your images together using names and tags, and then upload -them to [*Share Images via Repositories*]( -/userguide/dockerrepos/#contributing-to-docker-hub). - -## top - - Usage: docker top CONTAINER [ps OPTIONS] - - Display the running processes of a container - -## unpause - - Usage: docker unpause CONTAINER - - Unpause all processes within a container - -The `docker unpause` command uses the cgroups freezer to un-suspend all -processes in a container. - -See the -[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt) -for further details. - -## version - - Usage: docker version - - Show the Docker version information. - -Show the Docker version, API version, Git commit, and Go version of -both Docker client and daemon. - -## wait - - Usage: docker wait CONTAINER [CONTAINER...] - - Block until a container stops, then print its exit code. - diff --git a/reference/run.md~ b/reference/run.md~ deleted file mode 100644 index e10f614dd8..0000000000 --- a/reference/run.md~ +++ /dev/null @@ -1,792 +0,0 @@ -page_title: Docker run reference -page_description: Configure containers at runtime -page_keywords: docker, run, configure, runtime - -# Docker run reference - -**Docker runs processes in isolated containers**. 
When an operator -executes `docker run`, she starts a process with its own file system, -its own networking, and its own isolated process tree. The -[*Image*](/terms/image/#image) which starts the process may define -defaults related to the binary to run, the networking to expose, and -more, but `docker run` gives final control to the operator who starts -the container from the image. That's the main reason -[*run*](/reference/commandline/cli/#run) has more options than any -other `docker` command. - -## General form - -The basic `docker run` command takes this form: - - $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] - -To learn how to interpret the types of `[OPTIONS]`, -see [*Option types*](/reference/commandline/cli/#option-types). - -The list of `[OPTIONS]` breaks down into two groups: - -1. Settings exclusive to operators, including: - * Detached or Foreground running, - * Container Identification, - * Network settings, and - * Runtime Constraints on CPU and Memory - * Privileges and LXC Configuration -2. Settings shared between operators and developers, where operators can - override defaults developers set in images at build time. - -Together, the `docker run [OPTIONS]` give the operator complete control over runtime -behavior, allowing them to override all defaults set by -the developer during `docker build` and nearly all the defaults set by -the Docker runtime itself. - -## Operator exclusive options - -Only the operator (the person executing `docker run`) can set the -following options. - - - [Detached vs Foreground](#detached-vs-foreground) - - [Detached (-d)](#detached-d) - - [Foreground](#foreground) - - [Container Identification](#container-identification) - - [Name (--name)](#name-name) - - [PID Equivalent](#pid-equivalent) - - [IPC Settings](#ipc-settings) - - [Network Settings](#network-settings) - - [Restart Policies
(--restart)](#restart-policies-restart)
 - [Clean Up (--rm)](#clean-up-rm)
 - [Runtime Constraints on CPU and Memory](#runtime-constraints-on-cpu-and-memory)
 - [Runtime Privilege, Linux Capabilities, and LXC Configuration](#runtime-privilege-linux-capabilities-and-lxc-configuration)

## Detached vs foreground

When starting a Docker container, you must first decide if you want to
run the container in the background in a "detached" mode or in the
default foreground mode:

    -d=false: Detached mode: Run container in the background, print new container id

### Detached (-d)

In detached mode (`-d=true` or just `-d`), all I/O should be done
through network connections or shared volumes because the container is
no longer listening to the command line where you executed `docker run`.
You can reattach to a detached container with `docker`
[*attach*](/reference/commandline/cli/#attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm` option.

### Foreground

In foreground mode (the default when `-d` is not specified), `docker
run` can start the process in the container and attach the console to
the process's standard input, output, and standard error. It can even
pretend to be a TTY (this is what most command line executables expect)
and pass along signals. All of that is configurable:

    -a=[]           : Attach to `STDIN`, `STDOUT` and/or `STDERR`
    -t=false        : Allocate a pseudo-tty
    --sig-proxy=true: Proxy all received signals to the process (non-TTY mode only)
    -i=false        : Keep STDIN open even if not attached

If you do not specify `-a` then Docker will [attach all standard
streams](https://github.com/docker/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
You can
specify to which of the three standard streams (`STDIN`, `STDOUT`,
`STDERR`) you'd like to connect instead, as in:

    $ sudo docker run -a stdin -a stdout -i -t ubuntu /bin/bash

For interactive processes (like a shell), you must use `-i -t` together in
order to allocate a tty for the container process. Specifying `-t` is however
forbidden when the client's standard output is redirected or piped, such as in:
`echo test | docker run -i busybox cat`.

## Container identification

### Name (--name)

The operator can identify a container in three ways:

- UUID long identifier
  ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil_ptolemy")

The UUID identifiers come from the Docker daemon, and if you do not
assign a name to the container with `--name` then the daemon will also
generate a random string name. The name can become a handy way to
add meaning to a container since you can use this name when defining
[*links*](/userguide/dockerlinks) (or any
other place you need to identify a container). This works for both
background and foreground Docker containers.

### PID equivalent

Finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
programs might write out their process ID to a file (you've seen them as
PID files):

    --cidfile="": Write the container ID to the file

### Image[:tag]

While not strictly a means of identifying a container, you can specify a version of an
image you'd like to run the container with by adding `image[:tag]` to the command. For
example, `docker run ubuntu:14.04`.

## PID Settings

    --pid=""  : Set the PID (Process) Namespace mode for the container,
                'host': use the host's PID namespace inside the container

By default, all containers have the PID namespace enabled.

PID namespace provides separation of processes.
The PID Namespace removes the -view of the system processes, and allows process ids to be reused including -pid 1. - -In certain cases you want your container to share the host's process namespace, -basically allowing processes within the container to see all of the processes -on the system. For example, you could build a container with debugging tools -like `strace` or `gdb`, but want to use these tools when debugging processes -within the container. - - $ sudo docker run --pid=host rhel7 strace -p 1234 - -This command would allow you to use `strace` inside the container on pid 1234 on -the host. - -## IPC Settings - --ipc="" : Set the IPC mode for the container, - 'container:': reuses another container's IPC namespace - 'host': use the host's IPC namespace inside the container -By default, all containers have the IPC namespace enabled. - -IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. - -Shared memory segments are used to accelerate inter-process communication at -memory speed, rather than through pipes or through the network stack. Shared -memory is commonly used by databases and custom-built (typically C/OpenMPI, -C++/using boost libraries) high performance applications for scientific -computing and financial services industries. If these types of applications -are broken into multiple containers, you might need to share the IPC mechanisms -of the containers. 
## Network settings

    --dns=[]         : Set custom dns servers for the container
    --net="bridge"   : Set the Network mode for the container
                       'bridge': creates a new network stack for the container on the docker bridge
                       'none': no networking for this container
                       'container:<name|id>': reuses another container's network stack
                       'host': use the host network stack inside the container
    --add-host=""    : Add a line to /etc/hosts (host:IP)
    --mac-address="" : Sets the container's Ethernet device's MAC address

By default, all containers have networking enabled and they can make any
outgoing connections. The operator can completely disable networking
with `docker run --net none`, which disables all incoming and outgoing
networking. In cases like this, you would perform I/O through files or
`STDIN` and `STDOUT` only.

Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.

By default a random MAC is generated. You can set the container's MAC address
explicitly by providing a MAC via the `--mac-address` parameter (format:
`12:34:56:78:9a:bc`).

Supported networking modes are:

* none - no networking in the container
* bridge - (default) connect the container to the bridge via veth interfaces
* host - use the host's network stack inside the container. Note: This gives the container full access to local system services such as D-bus and is therefore considered insecure.
* container - use another container's network stack

#### Mode: none

With the networking mode set to `none` a container will not have
access to any external routes. The container will still have a
`loopback` interface enabled in the container but it does not have any
routes to external traffic.

#### Mode: bridge

With the networking mode set to `bridge` a container will use docker's
default networking setup. A bridge is set up on the host, commonly named
`docker0`, and a pair of `veth` interfaces will be created for the
container.
One side of the `veth` pair will remain on the host attached
to the bridge while the other side of the pair will be placed inside the
container's namespaces in addition to the `loopback` interface. An IP
address will be allocated for containers on the bridge's network and
traffic will be routed through this bridge to the container.

#### Mode: host

With the networking mode set to `host` a container will share the host's
network stack and all interfaces from the host will be available to the
container. The container's hostname will match the hostname on the host
system. Publishing ports and linking to other containers will not work
when sharing the host's network stack.

#### Mode: container

With the networking mode set to `container` a container will share the
network stack of another container. The other container's name must be
provided in the format of `--net container:<name|id>`.

The following example runs a Redis container with Redis binding to
`localhost`, then runs the `redis-cli` command and connects to the Redis
server over the `localhost` interface:

    $ sudo docker run -d --name redis example/redis --bind 127.0.0.1
    $ # use the redis container's network stack to access localhost
    $ sudo docker run --rm -ti --net container:redis example/redis-cli -h 127.0.0.1

### Managing /etc/hosts

Your container will have lines in `/etc/hosts` which define the hostname of the
container itself as well as `localhost` and a few other common things. The
`--add-host` flag can be used to add additional lines to `/etc/hosts`.
    $ docker run -ti --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
    172.17.0.22     09d03f76bf2c
    fe00::0         ip6-localnet
    ff00::0         ip6-mcastprefix
    ff02::1         ip6-allnodes
    ff02::2         ip6-allrouters
    127.0.0.1       localhost
    ::1             localhost ip6-localhost ip6-loopback
    86.75.30.9      db-static

## Restart policies (--restart)

Using the `--restart` flag on Docker run you can specify a restart policy for
how a container should or should not be restarted on exit.

When a restart policy is active on a container, it will be shown as either `Up`
or `Restarting` in [`docker ps`](/reference/commandline/cli/#ps). It can also be
useful to use [`docker events`](/reference/commandline/cli/#events) to see the
restart policy in effect.

Docker supports the following restart policies:
| Policy | Result |
|--------|--------|
| `no` | Do not automatically restart the container when it exits. This is the default. |
| `on-failure[:max-retries]` | Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts. |
| `always` | Always restart the container regardless of the exit status. When you specify `always`, the Docker daemon will try to restart the container indefinitely. |
An ever-increasing delay (double the previous delay, starting at 100
milliseconds) is added before each restart to prevent flooding the server.
This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600,
and so on until either the `on-failure` limit is hit, or when you `docker stop`
or `docker rm -f` the container.

If a container is successfully restarted (the container is started and runs
for at least 10 seconds), the delay is reset to its default value of 100 ms.

You can specify the maximum number of times Docker will try to restart the
container when using the **on-failure** policy. The default is that Docker
will try forever to restart the container. The number of (attempted) restarts
for a container can be obtained via [`docker inspect`](
/reference/commandline/cli/#inspect). For example, to get the number of restarts
for container "my-container":

    $ sudo docker inspect -f "{{ .RestartCount }}" my-container
    # 2

Or, to get the last time the container was (re)started:

    $ docker inspect -f "{{ .State.StartedAt }}" my-container
    # 2015-03-04T23:47:07.691840179Z

You cannot set any restart policy in combination with
["clean up (--rm)"](#clean-up-rm). Setting both `--restart` and `--rm`
results in an error.

### Examples

    $ sudo docker run --restart=always redis

This will run the `redis` container with a restart policy of **always**
so that if the container exits, Docker will restart it.

    $ sudo docker run --restart=on-failure:10 redis

This will run the `redis` container with a restart policy of **on-failure**
and a maximum restart count of 10. If the `redis` container exits with a
non-zero exit status more than 10 times in a row Docker will abort trying to
restart the container. Providing a maximum restart limit is only valid for the
**on-failure** policy.

## Clean up (--rm)

By default a container's file system persists even after the container
exits.
This makes debugging a lot easier (since you can inspect the -final state) and you retain all your data by default. But if you are -running short-term **foreground** processes, these container file -systems can really pile up. If instead you'd like Docker to -**automatically clean up the container and remove the file system when -the container exits**, you can add the `--rm` flag: - - --rm=false: Automatically remove the container when it exits (incompatible with -d) - -## Security configuration - --security-opt="label:user:USER" : Set the label user for the container - --security-opt="label:role:ROLE" : Set the label role for the container - --security-opt="label:type:TYPE" : Set the label type for the container - --security-opt="label:level:LEVEL" : Set the label level for the container - --security-opt="label:disable" : Turn off label confinement for the container - --security-opt="apparmor:PROFILE" : Set the apparmor profile to be applied - to the container - -You can override the default labeling scheme for each container by specifying -the `--security-opt` flag. For example, you can specify the MCS/MLS level, a -requirement for MLS systems. Specifying the level in the following command -allows you to share the same content between containers. - - # docker run --security-opt label:level:s0:c100,c200 -i -t fedora bash - -An MLS example might be: - - # docker run --security-opt label:level:TopSecret -i -t rhel7 bash - -To disable the security labeling for this container versus running with the -`--permissive` flag, use the following command: - - # docker run --security-opt label:disable -i -t fedora bash - -If you want a tighter security policy on the processes within a container, -you can specify an alternate type for the container. 
You could run a container -that is only allowed to listen on Apache ports by executing the following -command: - - # docker run --security-opt label:type:svirt_apache_t -i -t centos bash - -Note: - -You would have to write policy defining a `svirt_apache_t` type. - -## Runtime constraints on CPU and memory - -The operator can also adjust the performance parameters of the -container: - - -m="": Memory limit (format: , where unit = b, k, m or g) - -c=0 : CPU shares (relative weight) - -The operator can constrain the memory available to a container easily -with `docker run -m`. If the host supports swap memory, then the `-m` -memory setting can be larger than physical RAM. - -We have four ways to set memory usage: - - - memory=L<inf, memory-swap=inf (specify memory and set memory-swap as `-1`) - It is not allowed to use more than L bytes of memory, but use as much swap - as you want (only if the host supports swap memory). - - - memory=L<inf, memory-swap=2*L (specify memory without memory-swap) - It is not allowed to use more than L bytes of memory, swap *plus* memory - usage is double of that. - - - memory=L<inf, memory-swap=S<inf, L<=S (specify both memory and memory-swap) - It is not allowed to use more than L bytes of memory, swap *plus* memory - usage is limited by S. - -Similarly the operator can increase the priority of this container with -the `-c` option. By default, all containers run at the same priority and -get the same proportion of CPU cycles, but you can tell the kernel to -give more shares of CPU time to one or more containers when you start -them via Docker. - -The flag `-c` or `--cpu-shares` with value 0 indicates that the running -container has access to all 1024 (default) CPU shares. However, this value -can be modified to run a container with a different priority or different -proportion of CPU cycles. 
-
-For example, if we start three containers {C0, C1, C2} with default values
-(`-c` or `--cpu-shares` = 0) and one container {C3} with (`-c` or `--cpu-shares`=512),
-then C0, C1, and C2 would have access to 100% CPU shares (1024) and C3 would
-only have access to 50% CPU shares (512). In the context of a time-sliced OS
-with a time quantum of 100 milliseconds, containers C0, C1, and C2 will run
-for a full time quantum, and container C3 will run for half a time quantum,
-i.e., 50 milliseconds.
-
-## Runtime privilege, Linux capabilities, and LXC configuration
-
-    --cap-add: Add Linux capabilities
-    --cap-drop: Drop Linux capabilities
-    --privileged=false: Give extended privileges to this container
-    --device=[]: Allows you to run devices inside the container without the --privileged flag.
-    --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
-
-By default, Docker containers are "unprivileged" and cannot, for
-example, run a Docker daemon inside a Docker container. This is because
-by default a container is not allowed to access any devices, but a
-"privileged" container is given access to all devices (see [lxc-template.go](
-https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go)
-and documentation on [cgroups devices](
-https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
-
-When the operator executes `docker run --privileged`, Docker will enable
-access to all devices on the host, as well as set some configuration
-in AppArmor or SELinux to allow the container nearly all the same access to the
-host as processes running outside containers on the host. Additional
-information about running with `--privileged` is available on the
-[Docker Blog](http://blog.docker.com/2013/09/docker-can-now-run-within-docker/).
-
-If you want to limit access to a specific device or devices, you can use
-the `--device` flag. It allows you to specify one or more devices that
-will be accessible within the container.
- - $ sudo docker run --device=/dev/snd:/dev/snd ... - -By default, the container will be able to `read`, `write`, and `mknod` these devices. -This can be overridden using a third `:rwm` set of options to each `--device` flag: - - -``` - $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc - - Command (m for help): q - $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc - You will not be able to write the partition table. - - Command (m for help): q - - $ sudo docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc - crash.... - - $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc - fdisk: unable to open /dev/xvdc: Operation not permitted -``` - -In addition to `--privileged`, the operator can have fine grain control over the -capabilities using `--cap-add` and `--cap-drop`. By default, Docker has a default -list of capabilities that are kept. Both flags support the value `all`, so if the -operator wants to have all capabilities but `MKNOD` they could use: - - $ sudo docker run --cap-add=ALL --cap-drop=MKNOD ... - -For interacting with the network stack, instead of using `--privileged` they -should use `--cap-add=NET_ADMIN` to modify the network interfaces. 
- - $ docker run -t -i --rm ubuntu:14.04 ip link add dummy0 type dummy - RTNETLINK answers: Operation not permitted - $ docker run -t -i --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy - -To mount a FUSE based filesystem, you need to combine both `--cap-add` and -`--device`: - - $ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt - fuse: failed to open /dev/fuse: Operation not permitted - $ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt - fusermount: mount failed: Operation not permitted - $ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs - # sshfs sven@10.10.10.20:/home/sven /mnt - The authenticity of host '10.10.10.20 (10.10.10.20)' can't be established. - ECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6. - Are you sure you want to continue connecting (yes/no)? yes - sven@10.10.10.20's password: - root@30aa0cfaf1b5:/# ls -la /mnt/src/docker - total 1516 - drwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 . - drwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 .. - -rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore - -rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml - drwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git - -rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore - .... - - -If the Docker daemon was started using the `lxc` exec-driver -(`docker -d --exec-driver=lxc`) then the operator can also specify LXC options -using one or more `--lxc-conf` parameters. These can be new parameters or -override existing parameters from the [lxc-template.go]( -https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go). -Note that in the future, a given host's docker daemon may not use LXC, so this -is an implementation-specific configuration meant for operators already -familiar with using LXC directly. 
- -> **Note:** -> If you use `--lxc-conf` to modify a container's configuration which is also -> managed by the Docker daemon, then the Docker daemon will not know about this -> modification, and you will need to manage any conflicts yourself. For example, -> you can use `--lxc-conf` to set a container's IP address, but this will not be -> reflected in the `/etc/hosts` file. - -## Overriding Dockerfile image defaults - -When a developer builds an image from a [*Dockerfile*](/reference/builder) -or when she commits it, the developer can set a number of default parameters -that take effect when the image starts up as a container. - -Four of the Dockerfile commands cannot be overridden at runtime: `FROM`, -`MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding override -in `docker run`. We'll go through what the developer might have set in each -Dockerfile instruction and how the operator can override that setting. - - - [CMD (Default Command or Options)](#cmd-default-command-or-options) - - [ENTRYPOINT (Default Command to Execute at Runtime)]( - #entrypoint-default-command-to-execute-at-runtime) - - [EXPOSE (Incoming Ports)](#expose-incoming-ports) - - [ENV (Environment Variables)](#env-environment-variables) - - [VOLUME (Shared Filesystems)](#volume-shared-filesystems) - - [USER](#user) - - [WORKDIR](#workdir) - -## CMD (default command or options) - -Recall the optional `COMMAND` in the Docker -commandline: - - $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] - -This command is optional because the person who created the `IMAGE` may -have already provided a default `COMMAND` using the Dockerfile `CMD` -instruction. As the operator (the person running a container from the -image), you can override that `CMD` instruction just by specifying a new -`COMMAND`. - -If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND` -get appended as arguments to the `ENTRYPOINT`. 
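The way `ENTRYPOINT`, `CMD`, and an operator-supplied `COMMAND` combine can be sketched as a small function. This is an illustrative model of the behavior described above, not Docker's actual implementation; the helper name `effective_command` is ours:

```python
def effective_command(entrypoint, cmd, command=None):
    """Model how ENTRYPOINT, CMD, and an operator COMMAND combine.

    - A COMMAND given on `docker run` replaces the image's CMD.
    - If an ENTRYPOINT exists, CMD/COMMAND are appended to it as arguments.
    All values are argv-style lists; this is a simplified sketch.
    """
    args = command if command is not None else (cmd or [])
    if entrypoint:
        return list(entrypoint) + list(args)
    return list(args)

# Image with only a CMD: the operator's COMMAND replaces it entirely.
print(effective_command(None, ["redis-server"], ["bash"]))          # ['bash']
# Image with an ENTRYPOINT: the COMMAND becomes extra arguments.
print(effective_command(["/usr/bin/redis-cli"], None, ["--help"]))  # ['/usr/bin/redis-cli', '--help']
```

The second call mirrors the `--entrypoint` examples in the next section: with an `ENTRYPOINT` set, everything after the image name is passed as arguments rather than run as a command.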
- -## ENTRYPOINT (default command to execute at runtime) - - --entrypoint="": Overwrite the default entrypoint set by the image - -The `ENTRYPOINT` of an image is similar to a `COMMAND` because it -specifies what executable to run when the container starts, but it is -(purposely) more difficult to override. The `ENTRYPOINT` gives a -container its default nature or behavior, so that when you set an -`ENTRYPOINT` you can run the container *as if it were that binary*, -complete with default options, and you can pass in more options via the -`COMMAND`. But, sometimes an operator may want to run something else -inside the container, so you can override the default `ENTRYPOINT` at -runtime by using a string to specify the new `ENTRYPOINT`. Here is an -example of how to run a shell in a container that has been set up to -automatically run something else (like `/usr/bin/redis-server`): - - $ sudo docker run -i -t --entrypoint /bin/bash example/redis - -or two examples of how to pass more parameters to that ENTRYPOINT: - - $ sudo docker run -i -t --entrypoint /bin/bash example/redis -c ls -l - $ sudo docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help - -## EXPOSE (incoming ports) - -The Dockerfile doesn't give much control over networking, only providing -the `EXPOSE` instruction to give a hint to the operator about what -incoming ports might provide services. The following options work with -or override the Dockerfile's exposed defaults: - - --expose=[]: Expose a port or a range of ports from the container - without publishing it to your host - -P=false : Publish all exposed ports to the host interfaces - -p=[] : Publish a container᾿s port or a range of ports to the host - format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort - Both hostPort and containerPort can be specified as a range of ports. 
- When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`) - (use 'docker port' to see the actual mapping) - --link="" : Add link to another container (:alias) - -As mentioned previously, `EXPOSE` (and `--expose`) makes ports available -**in** a container for incoming connections. The port number on the -inside of the container (where the service listens) does not need to be -the same number as the port exposed on the outside of the container -(where clients connect), so inside the container you might have an HTTP -service listening on port 80 (and so you `EXPOSE 80` in the Dockerfile), -but outside the container the port might be 42800. - -To help a new client container reach the server container's internal -port operator `--expose`'d by the operator or `EXPOSE`'d by the -developer, the operator has three choices: start the server container -with `-P` or `-p,` or start the client container with `--link`. - -If the operator uses `-P` or `-p` then Docker will make the exposed port -accessible on the host and the ports will be available to any client -that can reach the host. When using `-P`, Docker will bind the exposed -ports to a random port on the host between 49153 and 65535. To find the -mapping between the host ports and the exposed ports, use `docker port`. - -If the operator uses `--link` when starting the new client container, -then the client container can access the exposed port via a private -networking interface. Docker will set some environment variables in the -client container to help indicate which interface and port to use. - -## ENV (environment variables) - -When a new container is created, Docker will set the following environment -variables automatically: - - - - - - - - - - - - - - - - - - - - - -
-| Variable   | Value                                                                                                |
-|------------|------------------------------------------------------------------------------------------------------|
-| `HOME`     | Set based on the value of `USER`                                                                     |
-| `HOSTNAME` | The hostname associated with the container                                                           |
-| `PATH`     | Includes popular directories, such as `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin` |
-| `TERM`     | `xterm` if the container is allocated a pseudo-TTY                                                   |
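The resulting environment behaves like successive dictionary updates: Docker's defaults, then any Dockerfile `ENV` values, then the operator's `-e` flags. This is an illustrative model (the helper name and sample values are ours, not Docker code):

```python
def container_env(image_env, operator_env):
    """Sketch of env precedence: Docker defaults < Dockerfile ENV < -e flags."""
    env = {
        "HOME": "/",
        "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    }
    env.update(image_env)     # ENV instructions baked into the image
    env.update(operator_env)  # -e flags win over everything else
    return env

env = container_env({"HOME": "/app"}, {"deep": "purple"})
print(env["HOME"], env["deep"])  # /app purple
```

Later values override earlier ones, which is why a `-e` flag can override both the defaults in the table above and anything the developer set with `ENV`.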
- -The container may also include environment variables defined -as a result of the container being linked with another container. See -the [*Container Links*](/userguide/dockerlinks/#container-linking) -section for more details. - -Additionally, the operator can **set any environment variable** in the -container by using one or more `-e` flags, even overriding those mentioned -above, or already defined by the developer with a Dockerfile `ENV`: - - $ sudo docker run -e "deep=purple" --rm ubuntu /bin/bash -c export - declare -x HOME="/" - declare -x HOSTNAME="85bc26a0e200" - declare -x OLDPWD - declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" - declare -x PWD="/" - declare -x SHLVL="1" - declare -x container="lxc" - declare -x deep="purple" - -Similarly the operator can set the **hostname** with `-h`. - -`--link :alias` also sets environment variables, using the *alias* string to -define environment variables within the container that give the IP and PORT -information for connecting to the service container. Let's imagine we have a -container running Redis: - - # Start the service container, named redis-name - $ sudo docker run -d --name redis-name dockerfiles/redis - 4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3 - - # The redis-name container exposed port 6379 - $ sudo docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 4241164edf6f $ dockerfiles/redis:latest /redis-stable/src/re 5 seconds ago Up 4 seconds 6379/tcp redis-name - - # Note that there are no public ports exposed since we didn᾿t use -p or -P - $ sudo docker port 4241164edf6f 6379 - 2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f - -Yet we can get information about the Redis container's exposed ports -with `--link`. Choose an alias that will form a -valid environment variable! 
-
-    $ sudo docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export
-    declare -x HOME="/"
-    declare -x HOSTNAME="acda7f7b1cdc"
-    declare -x OLDPWD
-    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
-    declare -x PWD="/"
-    declare -x REDIS_ALIAS_NAME="/distracted_wright/redis"
-    declare -x REDIS_ALIAS_PORT="tcp://172.17.0.32:6379"
-    declare -x REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"
-    declare -x REDIS_ALIAS_PORT_6379_TCP_ADDR="172.17.0.32"
-    declare -x REDIS_ALIAS_PORT_6379_TCP_PORT="6379"
-    declare -x REDIS_ALIAS_PORT_6379_TCP_PROTO="tcp"
-    declare -x SHLVL="1"
-    declare -x container="lxc"
-
-And we can use that information to connect from another container as a client:
-
-    $ sudo docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
-    172.17.0.32:6379>
-
-Docker will also map the private IP address to the alias of a linked
-container by inserting an entry into `/etc/hosts`. You can use this
-mechanism to communicate with a linked container by its alias:
-
-    $ sudo docker run -d --name servicename busybox sleep 30
-    $ sudo docker run -i -t --link servicename:servicealias busybox ping -c 1 servicealias
-
-If you restart the source container (`servicename` in this case), the recipient
-container's `/etc/hosts` entry will be automatically updated.
-
-> **Note**:
-> Unlike host entries in the `/etc/hosts` file, IP addresses stored in the
-> environment variables are not automatically updated if the source container is
-> restarted. We recommend using the host entries in `/etc/hosts` to resolve the
-> IP address of linked containers.
-
-## VOLUME (shared filesystems)
-
-    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
-           If "container-dir" is missing, then docker creates a new volume.
- --volumes-from="": Mount all volumes from the given container(s) - -The volumes commands are complex enough to have their own documentation -in section [*Managing data in -containers*](/userguide/dockervolumes). A developer can define -one or more `VOLUME`'s associated with an image, but only the operator -can give access from one container to another (or from a container to a -volume mounted on the host). - -## USER - -The default user within a container is `root` (id = 0), but if the -developer created additional users, those are accessible too. The -developer can set a default user to run the first process with the -Dockerfile `USER` instruction, but the operator can override it: - - -u="": Username or UID - -> **Note:** if you pass numeric uid, it must be in range 0-2147483647. - -## WORKDIR - -The default working directory for running binaries within a container is the -root directory (`/`), but the developer can set a different default with the -Dockerfile `WORKDIR` command. The operator can override this with: - - -w="": Working directory inside the container diff --git a/static_files/README.md~ b/static_files/README.md~ deleted file mode 100644 index 99dc104d4c..0000000000 --- a/static_files/README.md~ +++ /dev/null @@ -1,11 +0,0 @@ -Static files dir -================ - -Files you put in /sources/static_files/ will be copied to the web visible /_static/ - -Be careful not to override pre-existing static files from the template. - -Generally, layout related files should go in the /theme directory. - -If you want to add images to your particular documentation page. Just put them next to -your .rst source file and reference them relatively. 
\ No newline at end of file
diff --git a/swarm/API.md~ b/swarm/API.md~
deleted file mode 100644
index 1630be4b39..0000000000
--- a/swarm/API.md~
+++ /dev/null
@@ -1,57 +0,0 @@
-no_version_dropdown: true---
-page_title: Docker Swarm API
-page_description: Swarm API
-page_keywords: docker, swarm, clustering, api
----
-
-# Docker Swarm API
-
-The Docker Swarm API is compatible with the [Official Docker API](https://docs.docker.com/reference/api/docker_remote_api/).
-
-Here are the main differences:
-
-## Some endpoints are not (yet) implemented
-
-```
-GET "/images/get"
-GET "/images/{name:.*}/get"
-GET "/containers/{name:.*}/attach/ws"
-
-POST "/commit"
-POST "/build"
-POST "/images/create"
-POST "/images/load"
-POST "/images/{name:.*}/push"
-POST "/images/{name:.*}/tag"
-
-DELETE "/images/{name:.*}"
-```
-
-## Some endpoints have more information
-
-* `GET "/containers/{name:.*}/json"`: New field `Node` added:
-
-```json
-"Node": {
-    "ID": "ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX",
-    "IP": "0.0.0.0",
-    "Addr": "http://0.0.0.0:4243",
-    "Name": "vagrant-ubuntu-saucy-64",
-    "Cpus": 1,
-    "Memory": 2099654656,
-    "Labels": {
-        "executiondriver": "native-0.2",
-        "kernelversion": "3.11.0-15-generic",
-        "operatingsystem": "Ubuntu 13.10",
-        "storagedriver": "aufs"
-    }
-},
-```
-* `GET "/containers/{name:.*}/json"`: `HostIP` replaced by the actual Node's IP if `HostIP` is `0.0.0.0`.
-
-* `GET "/containers/json"`: Node's name prepended to the container name.
-
-* `GET "/containers/json"`: `HostIP` replaced by the actual Node's IP if `HostIP` is `0.0.0.0`.
-
-* `GET "/containers/json"`: Containers started from the `swarm` official image are hidden by default; use `all=1` to display them.
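Because the payload is otherwise unchanged, a client that already consumes the Docker API can read the Swarm-specific `Node` field defensively: it is present when talking to Swarm and absent against a plain Docker daemon. A sketch (the container `Id` here is a placeholder; the `Node` values are taken from the example above):

```python
import json

# Truncated sample of a Swarm `GET /containers/{name}/json` response.
payload = json.loads("""
{"Id": "abc123",
 "Node": {"ID": "ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX",
          "IP": "0.0.0.0",
          "Name": "vagrant-ubuntu-saucy-64",
          "Cpus": 1}}
""")

node = payload.get("Node")  # None when talking to a plain Docker daemon
if node is not None:
    print("container runs on", node["Name"], "with", node["Cpus"], "CPU(s)")
```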
- diff --git a/swarm/discovery.md~ b/swarm/discovery.md~ deleted file mode 100644 index 4408514cc3..0000000000 --- a/swarm/discovery.md~ +++ /dev/null @@ -1,179 +0,0 @@ -no_version_dropdown: true--- -page_title: Docker Swarm discovery -page_description: Swarm discovery -page_keywords: docker, swarm, clustering, discovery ---- - -# Discovery - -`Docker Swarm` comes with multiple Discovery backends - -## Examples - -### Using the hosted discovery service - -```bash -# create a cluster -$ swarm create -6856663cdefdec325839a4b7e1de38e8 # <- this is your unique - -# on each of your nodes, start the swarm agent -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. -$ swarm join --addr= token:// - -# start the manager on any machine or your laptop -$ swarm manage -H tcp:// token:// - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ swarm list token:// - -``` - -### Using a static file describing the cluster - -```bash -# for each of your nodes, add a line to a file -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. -$ echo >> /tmp/my_cluster -$ echo >> /tmp/my_cluster -$ echo >> /tmp/my_cluster - -# start the manager on any machine or your laptop -$ swarm manage -H tcp:// file:///tmp/my_cluster - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ swarm list file:///tmp/my_cluster - - - -``` - -### Using etcd - -```bash -# on each of your nodes, start the swarm agent -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. 
-$ swarm join --addr= etcd:/// - -# start the manager on any machine or your laptop -$ swarm manage -H tcp:// etcd:/// - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ swarm list etcd:/// - -``` - -### Using consul - -```bash -# on each of your nodes, start the swarm agent -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. -$ swarm join --addr= consul:/// - -# start the manager on any machine or your laptop -$ swarm manage -H tcp:// consul:/// - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ swarm list consul:/// - -``` - -### Using zookeeper - -```bash -# on each of your nodes, start the swarm agent -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. -$ swarm join --addr= zk://,/ - -# start the manager on any machine or your laptop -$ swarm manage -H tcp:// zk://,/ - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ swarm list zk://,/ - -``` - -### Using a static list of ips - -```bash -# start the manager on any machine or your laptop -$ swarm manage -H nodes://, -#or -$ swarm manage -H nodes://, - -# use the regular docker cli -$ docker -H info -$ docker -H run ... -$ docker -H ps -$ docker -H logs ... -... 
-```
-
-## Contributing
-
-Contributing a new discovery backend is easy:
-simply implement this interface:
-
-```go
-type DiscoveryService interface {
-     Initialize(string, int) error
-     Fetch() ([]string, error)
-     Watch(WatchCallback)
-     Register(string) error
-}
-```
-
-## Extra tips
-
-### Initialize
-Takes the `discovery` string without the scheme and a heartbeat interval (in seconds).
-
-### Fetch
-Returns the list of all nodes from the discovery backend.
-
-### Watch
-Triggers an update (`Fetch`); this can happen either via
-a timer (as with `token`) or via backend-specific features (as with `etcd`).
-
-### Register
-Adds a new node to the discovery backend.
diff --git a/swarm/index.md~ b/swarm/index.md~
deleted file mode 100644
index 10e71bff3d..0000000000
--- a/swarm/index.md~
+++ /dev/null
@@ -1,99 +0,0 @@
-no_version_dropdown: true---
-page_title: Docker Swarm
-page_description: Swarm: a Docker-native clustering system
-page_keywords: docker, swarm, clustering
----
-
-# Docker Swarm
-
-Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts
-into a single, virtual host.
-
-Swarm serves the standard Docker API, so any tool which already communicates
-with a Docker daemon can use Swarm to transparently scale to multiple hosts:
-Dokku, Compose, Krane, Flynn, Deis, DockerUI, Shipyard, Drone, Jenkins... and,
-of course, the Docker client itself.
-
-Like other Docker projects, Swarm follows the "batteries included but removable"
-principle. It ships with a simple scheduling backend out of the box, and as
-initial development settles, an API will develop to enable pluggable backends.
-The goal is to provide a smooth out-of-box experience for simple use cases, and
-allow swapping in more powerful backends, like Mesos, for large scale production
-deployments.
-
-## Installation
-
-> **Note**: The only requirement for Swarm nodes is that they all run the _same_
-> release of the Docker daemon (version `1.4.0` and later), configured to listen
-> to a `tcp` port that the Swarm manager can access.
- -The easiest way to get started with Swarm is to use the -[official Docker image](https://registry.hub.docker.com/_/swarm/). - -```bash -docker pull swarm -``` - -## Nodes setup - -Each swarm node will run a swarm node agent which will register the referenced -Docker daemon, and will then monitor it, updating the discovery backend to its -status. - -The following example uses the Docker Hub based `token` discovery service: - -```bash -# create a cluster -$ docker run --rm swarm create -6856663cdefdec325839a4b7e1de38e8 # <- this is your unique - -# on each of your nodes, start the swarm agent -# doesn't have to be public (eg. 192.168.0.X), -# as long as the swarm manager can access it. -$ docker run -d swarm join --addr= token:// - -# start the manager on any machine or your laptop -$ docker run -d -p :2375 swarm manage token:// - -# use the regular docker cli -$ docker -H tcp:// info -$ docker -H tcp:// run ... -$ docker -H tcp:// ps -$ docker -H tcp:// logs ... -... - -# list nodes in your cluster -$ docker run --rm swarm list token:// - -``` - -> **Note**: In order for the Swarm manager to be able to communicate with the node agent on -each node, they must listen to a common network interface. This can be achieved -by starting with the `-H` flag (e.g. `-H tcp://0.0.0.0:2375`). - - -## TLS - -Swarm supports TLS authentication between the CLI and Swarm but also between -Swarm and the Docker nodes. _However_, all the Docker daemon certificates and client -certificates **must** be signed using the same CA-certificate. - -In order to enable TLS for both client and server, the same command line options -as Docker can be specified: - -`swarm manage --tlsverify --tlscacert= --tlscert= --tlskey= [...]` - -Please refer to the [Docker documentation](https://docs.docker.com/articles/https/) -for more information on how to set up TLS authentication on Docker and generating -the certificates. 
-
-> **Note**: Swarm certificates must be generated with `extendedKeyUsage = clientAuth,serverAuth`.
-
-## Discovery services
-
-See the [Discovery service](discovery.md) document for more information.
-
-## Advanced Scheduling
-
-See [filters](scheduler/filter.md) and [strategies](scheduler/strategy.md) to learn
-more about advanced scheduling.
diff --git a/swarm/scheduler/filter.md~ b/swarm/scheduler/filter.md~
deleted file mode 100644
index 5fbaaa92d5..0000000000
--- a/swarm/scheduler/filter.md~
+++ /dev/null
@@ -1,260 +0,0 @@
-no_version_dropdown: true---
-page_title: Docker Swarm filters
-page_description: Swarm filters
-page_keywords: docker, swarm, clustering, filters
----
-
-# Filters
-
-The `Docker Swarm` scheduler comes with multiple filters.
-
-These filters are used to schedule containers on a subset of nodes.
-
-`Docker Swarm` currently supports 4 filters:
-* [Constraint](#constraint-filter)
-* [Affinity](#affinity-filter)
-* [Port](#port-filter)
-* [Health](#health-filter)
-
-You can choose the filter(s) you want to use with the `--filter` flag of `swarm manage`.
-
-## Constraint Filter
-
-Constraints are key/value pairs associated with particular nodes. You can see them
-as *node tags*.
-
-When creating a container, the user can select a subset of nodes that should be
-considered for scheduling by specifying one or more sets of matching key/value pairs.
-
-This approach has several practical use cases, such as:
-* Selecting specific host properties (such as `storage=ssd`, in order to schedule
-  containers on specific hardware).
-* Tagging nodes based on their physical location (`region=us-east`, to force
-  containers to run in a given location).
-* Logical cluster partitioning (`environment=production`, to split a cluster into
-  sub-clusters with different properties).
-
-To tag a node with a specific set of key/value pairs, one must pass a list of
-`--label` options at docker startup time.
- -For instance, let's start `node-1` with the `storage=ssd` label: - -```bash -$ docker -d --label storage=ssd -$ swarm join --addr=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX -``` - -Again, but this time `node-2` with `storage=disk`: - -```bash -$ docker -d --label storage=disk -$ swarm join --addr=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX -``` - -Once the nodes are registered with the cluster, the master pulls their respective -tags and will take them into account when scheduling new containers. - -Let's start a MySQL server and make sure it gets good I/O performance by selecting -nodes with flash drives: - -``` -$ docker run -d -P -e constraint:storage==ssd --name db mysql -f8b693db9cd6 - -$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES -f8b693db9cd6 mysql:latest "mysqld" Less than a second ago running 192.168.0.42:49178->3306/tcp node-1 db -``` - -In this case, the master selected all nodes that met the `storage=ssd` constraint -and applied resource management on top of them, as discussed earlier. -`node-1` was selected in this example since it's the only host running flash. - -Now we want to run an `nginx` frontend in our cluster. However, we don't want -*flash* drives since we'll mostly write logs to disk. - -``` -$ docker run -d -P -e constraint:storage==disk --name frontend nginx -963841b138d8 - -$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES -963841b138d8 nginx:latest "nginx" Less than a second ago running 192.168.0.43:49177->80/tcp node-2 frontend -f8b693db9cd6 mysql:latest "mysqld" Up About a minute running 192.168.0.42:49178->3306/tcp node-1 db -``` - -The scheduler selected `node-2` since it was started with the `storage=disk` label. - -## Standard Constraints - -Additionally, a standard set of constraints can be used when scheduling containers -without specifying them when starting the node. 
Those tags are sourced from
-`docker info` and currently include:
-
-* storagedriver
-* executiondriver
-* kernelversion
-* operatingsystem
-
-## Affinity Filter
-
-#### Containers
-
-You can schedule two containers so that the second one runs next to the first.
-
-```
-$ docker run -d -p 80:80 --name front nginx
- 87c4376856a8
-
-$ docker ps
-CONTAINER ID   IMAGE          COMMAND   CREATED                  STATUS    PORTS                     NODE     NAMES
-87c4376856a8   nginx:latest   "nginx"   Less than a second ago   running   192.168.0.42:80->80/tcp   node-1   front
-```
-
-Using `-e affinity:container==front` will schedule a container next to the container `front`.
-You can also use IDs instead of names: `-e affinity:container==87c4376856a8`
-
-```
-$ docker run -d --name logger -e affinity:container==front logger
- 87c4376856a8
-
-$ docker ps
-CONTAINER ID   IMAGE           COMMAND    CREATED                  STATUS    PORTS                     NODE     NAMES
-87c4376856a8   nginx:latest    "nginx"    Less than a second ago   running   192.168.0.42:80->80/tcp   node-1   front
-963841b138d8   logger:latest   "logger"   Less than a second ago   running                             node-1   logger
-```
-
-The `logger` container ends up on `node-1` because of its affinity with the `front` container.
-
-#### Images
-
-You can schedule a container only on nodes where the image is already pulled.
-
-```
-$ docker -H node-1:2375 pull redis
-$ docker -H node-2:2375 pull mysql
-$ docker -H node-3:2375 pull redis
-```
-
-Here only `node-1` and `node-3` have the `redis` image. Using `-e affinity:image==redis`, we can
-schedule containers only on those two nodes. You can also use the image ID instead of its name.
-
-```
-$ docker run -d --name redis1 -e affinity:image==redis redis
-$ docker run -d --name redis2 -e affinity:image==redis redis
-$ docker run -d --name redis3 -e affinity:image==redis redis
-$ docker run -d --name redis4 -e affinity:image==redis redis
-$ docker run -d --name redis5 -e affinity:image==redis redis
-$ docker run -d --name redis6 -e affinity:image==redis redis
-$ docker run -d --name redis7 -e affinity:image==redis redis
-$ docker run -d --name redis8 -e affinity:image==redis redis
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS               NODE        NAMES
-87c4376856a8        redis:latest        "redis"             Less than a second ago   running                                 node-1      redis1
-1212386856a8        redis:latest        "redis"             Less than a second ago   running                                 node-1      redis2
-87c4376639a8        redis:latest        "redis"             Less than a second ago   running                                 node-3      redis3
-1234376856a8        redis:latest        "redis"             Less than a second ago   running                                 node-1      redis4
-86c2136253a8        redis:latest        "redis"             Less than a second ago   running                                 node-3      redis5
-87c3236856a8        redis:latest        "redis"             Less than a second ago   running                                 node-3      redis6
-87c4376856b8        redis:latest        "redis"             Less than a second ago   running                                 node-3      redis7
-963841b138d8        redis:latest        "redis"             Less than a second ago   running                                 node-1      redis8
-```
-
-As you can see here, the containers were only scheduled on nodes that already had the `redis` image pulled.
-
-#### Expression Syntax
-
-An affinity or a constraint expression consists of a `key` and a `value`.
-A `key` must be an alphanumeric string that begins with a letter or an underscore.
-
-A `value` must be one of the following:
-* An alphanumeric string that may also contain dots, hyphens, and underscores.
-* A globbing pattern, e.g., `abc*`.
-* A regular expression in the form of `/regexp/`. Go's regular expression syntax is supported.
-
-`swarm` currently supports two affinity/constraint operators: `==` and `!=`.
-
-For example,
-* `constraint:node==node1` will match node `node1`.
-* `constraint:node!=node1` will match all nodes, except `node1`.
-
-* `constraint:region!=us*` will match all nodes outside the regions prefixed with `us`.
-* `constraint:node==/node[12]/` will match nodes `node1` and `node2`.
-* `constraint:node==/node\d/` will match all nodes with `node` + 1 digit.
-* `constraint:node!=/node-[01]/` will match all nodes, except `node-0` and `node-1`.
-* `constraint:node!=/foo\[bar\]/` will match all nodes, except `foo[bar]`. You can see the use of escape characters here.
-* `constraint:node==/(?i)node1/` will match node `node1` case-insensitively, so `NoDe1` and `NODE1` will also match.
-
-## Port Filter
-
-With this filter, a `port` is considered a unique resource.
-
-```
-$ docker run -d -p 80:80 nginx
-87c4376856a8
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS                      NODE        NAMES
-87c4376856a8        nginx:latest        "nginx"             Less than a second ago   running             192.168.0.42:80->80/tcp    node-1      prickly_engelbart
-```
-
-The cluster selects a node where the public port `80` is available and schedules
-a container on it, in this case `node-1`.
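The idea of treating a host port as a per-node unique resource can be sketched in a few lines of Python. This is a toy model for illustration only, not Swarm's actual implementation; the `schedule` helper and node names are invented for the example:

```python
# Toy model of the port filter: a published host port is a unique resource
# per node, so a container that asks for a public port can only land on a
# node where that port is still free.
def schedule(nodes, used_ports, requested_port):
    """Return the first node where requested_port is free, or None."""
    for node in nodes:
        ports = used_ports.setdefault(node, set())
        if requested_port not in ports:
            ports.add(requested_port)  # reserve the port on that node
            return node
    return None  # no node can satisfy the request

nodes = ["node-1", "node-2", "node-3"]
used = {}
print(schedule(nodes, used, 80))  # node-1
print(schedule(nodes, used, 80))  # node-2
print(schedule(nodes, used, 80))  # node-3
print(schedule(nodes, used, 80))  # None: "no resources available"
```

Running the calls in order reproduces the node-1, node-2, node-3, then failure sequence walked through in this section.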
-
-Attempting to run another container with the public port `80` will result in
-the cluster selecting a different node, since that port is already occupied on `node-1`:
-
-```
-$ docker run -d -p 80:80 nginx
-963841b138d8
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS                      NODE        NAMES
-963841b138d8        nginx:latest        "nginx"             Less than a second ago   running             192.168.0.43:80->80/tcp    node-2      dreamy_turing
-87c4376856a8        nginx:latest        "nginx"             Up About a minute        running             192.168.0.42:80->80/tcp    node-1      prickly_engelbart
-```
-
-Again, repeating the same command will result in the selection of `node-3`, since
-port `80` is neither available on `node-1` nor `node-2`:
-
-```
-$ docker run -d -p 80:80 nginx
-f8b693db9cd6
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS                      NODE        NAMES
-f8b693db9cd6        nginx:latest        "nginx"             Less than a second ago   running             192.168.0.44:80->80/tcp    node-3      stoic_albattani
-963841b138d8        nginx:latest        "nginx"             Up About a minute        running             192.168.0.43:80->80/tcp    node-2      dreamy_turing
-87c4376856a8        nginx:latest        "nginx"             Up About a minute        running             192.168.0.42:80->80/tcp    node-1      prickly_engelbart
-```
-
-Finally, the cluster will refuse to run another container that requires port
-`80`, since not a single node in the cluster has it available:
-
-```
-$ docker run -d -p 80:80 nginx
-2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container
-```
-
-## Dependency Filter
-
-This filter co-schedules dependent containers on the same node.
-
-Currently, dependencies are declared as follows:
-
-- Shared volumes: `--volumes-from=dependency`
-- Links: `--link=dependency:alias`
-- Shared network stack: `--net=container:dependency`
-
-Swarm will attempt to co-locate the dependent container on the same node as its
-dependencies. If that cannot be done (because the dependency doesn't exist, or
-because the node doesn't have enough resources), it will prevent the container creation.
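The co-location rule above can be sketched as a small check over a placement table. This is an illustrative toy, not Swarm's code; the `placements` table and the `eligible_node` helper are invented for the example:

```python
# Toy model of the dependency filter: a container that declares
# --volumes-from, --link, or --net=container: dependencies may only be
# placed on the single node where ALL of its dependencies already run.
placements = {"db": "node-1", "cache": "node-1", "web": "node-2"}  # invented

def eligible_node(dependencies):
    """Return the one node holding every dependency, or None to refuse."""
    nodes = {placements.get(dep) for dep in dependencies}
    if None in nodes or len(nodes) != 1:
        return None  # a dependency is missing, or deps span several nodes
    return nodes.pop()

print(eligible_node(["db"]))           # node-1
print(eligible_node(["db", "cache"]))  # node-1 (both live on the same node)
print(eligible_node(["db", "web"]))    # None   (deps on different nodes)
```

The `None` result corresponds to the scheduler refusing the container creation rather than splitting its dependencies.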
-
-The combination of multiple dependencies will be honored if possible. For
-instance, `--volumes-from=A --net=container:B` will attempt to co-locate the
-container on the same node as `A` and `B`. If those containers are running on
-different nodes, Swarm will prevent you from scheduling the container.
-
-## Health Filter
-
-This filter will prevent scheduling containers on unhealthy nodes.
diff --git a/swarm/scheduler/strategy.md~ b/swarm/scheduler/strategy.md~
deleted file mode 100644
index dd8b09c32e..0000000000
--- a/swarm/scheduler/strategy.md~
+++ /dev/null
@@ -1,56 +0,0 @@
----
-no_version_dropdown: true
-page_title: Docker Swarm strategies
-page_description: Swarm strategies
-page_keywords: docker, swarm, clustering, strategies
----
-
-# Strategies
-
-The `Docker Swarm` scheduler comes with multiple strategies.
-
-These strategies are used to rank nodes using a score computed by the strategy.
-
-`Docker Swarm` currently supports two strategies:
-* [BinPacking](#binpacking-strategy)
-* [Random](#random-strategy)
-
-You can choose the strategy you want to use with the `--strategy` flag of `swarm manage`.
-
-## BinPacking strategy
-
-The BinPacking strategy ranks the nodes by their available CPU and RAM and returns
-the most packed node. This avoids fragmentation and leaves room for bigger containers
-on unused machines.
-
-For instance, let's say that both `node-1` and `node-2` have 2G of RAM:
-
-```bash
-$ docker run -d -P -m 1G --name db mysql
-f8b693db9cd6
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS                           NODE        NAMES
-f8b693db9cd6        mysql:latest        "mysqld"            Less than a second ago   running             192.168.0.42:49178->3306/tcp    node-1      db
-```
-
-In this case, `node-1` was chosen randomly, because no containers were running, so `node-1` and
-`node-2` had the same score.
-
-Now we start another container, asking for 1G of RAM again.
-
-```bash
-$ docker run -d -P -m 1G --name frontend nginx
-963841b138d8
-
-$ docker ps
-CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS              PORTS                           NODE        NAMES
-963841b138d8        nginx:latest        "nginx"             Less than a second ago   running             192.168.0.42:49177->80/tcp      node-1      frontend
-f8b693db9cd6        mysql:latest        "mysqld"            Up About a minute        running             192.168.0.42:49178->3306/tcp    node-1      db
-```
-
-The container `frontend` was also started on `node-1` because it was already the most
-packed node. This allows us to start a container requiring 2G of RAM on `node-2`.
-
-## Random strategy
-
-The Random strategy, as its name suggests, chooses a random node. It is used mainly
-for debugging.
diff --git a/terms/container.md~ b/terms/container.md~
deleted file mode 100644
index 8b42868788..0000000000
--- a/terms/container.md~
+++ /dev/null
@@ -1,44 +0,0 @@
-page_title: Container
-page_description: Definition of a container
-page_keywords: containers, lxc, concepts, explanation, image, container
-
-# Container
-
-## Introduction
-
-![](/terms/images/docker-filesystems-busyboxrw.png)
-
-Once you start a process in Docker from an [*Image*](/terms/image), Docker
-fetches the image and its [*Parent Image*](/terms/image), and repeats the
-process until it reaches the [*Base Image*](/terms/image/#base-image-def). Then
-the [*Union File System*](/terms/layer) adds a read-write layer on top. That
-read-write layer, plus the information about its [*Parent Image*](/terms/image)
-and some additional information like its unique ID, networking
-configuration, and resource limits is called a **container**.
-
-## Container State
-
-Containers can change, and so they have state. A container may be
-**running** or **exited**.
-
-When a container is running, the idea of a "container" also includes a
-tree of processes running on the CPU, isolated from the other processes
-running on the host.
-
-When the container is exited, the state of the file system and its exit
-value is preserved.
You can start, stop, and restart a container. The
-processes restart from scratch (their memory state is **not** preserved
-in a container), but the file system is just as it was when the
-container was stopped.
-
-You can promote a container to an [*Image*](/terms/image) with `docker commit`.
-Once a container is an image, you can use it as a parent for new containers.
-
-## Container IDs
-
-All containers are identified by a 64-hexadecimal-digit string
-(internally a 256-bit value). To simplify their use, a short ID of the
-first 12 characters can be used on the command line. There is a small
-possibility of short ID collisions, so the Docker server will always
-return the long ID.
diff --git a/terms/filesystem.md~ b/terms/filesystem.md~
deleted file mode 100644
index 5587e3c831..0000000000
--- a/terms/filesystem.md~
+++ /dev/null
@@ -1,35 +0,0 @@
-page_title: File Systems
-page_description: How Linux organizes its persistent storage
-page_keywords: containers, files, linux
-
-# File System
-
-## Introduction
-
-![](/terms/images/docker-filesystems-generic.png)
-
-In order for a Linux system to run, it typically needs two [file
-systems](http://en.wikipedia.org/wiki/Filesystem):
-
-1. boot file system (bootfs)
-2. root file system (rootfs)
-
-The **boot file system** contains the bootloader and the kernel. The
-user never makes any changes to the boot file system. In fact, soon
-after the boot process is complete, the entire kernel is in memory, and
-the boot file system is unmounted to free up the RAM associated with the
-initrd disk image.
-
-The **root file system** includes the typical directory structure we
-associate with Unix-like operating systems:
-`/dev`, `/proc`, `/bin`, `/etc`, `/lib`, `/usr`, and `/tmp`, plus all the configuration
-files, binaries and libraries required to run user applications (like bash,
-ls, and so forth).
-
-While there can be important kernel differences between different Linux
-distributions, the contents and organization of the root file system are
-usually what make your software packages dependent on one distribution
-versus another. Docker can help solve this problem by running multiple
-distributions at the same time.
-
-![](/terms/images/docker-filesystems-multiroot.png)
diff --git a/terms/image.md~ b/terms/image.md~
deleted file mode 100644
index e42a6cfa12..0000000000
--- a/terms/image.md~
+++ /dev/null
@@ -1,40 +0,0 @@
-page_title: Images
-page_description: Definition of an image
-page_keywords: containers, lxc, concepts, explanation, image, container
-
-# Image
-
-## Introduction
-
-![](/terms/images/docker-filesystems-debian.png)
-
-In Docker terminology, a read-only [*Layer*](/terms/layer/#layer) is
-called an **image**. An image never changes.
-
-Since Docker uses a [*Union File System*](/terms/layer/#union-file-system), the
-processes think the whole file system is mounted read-write. But all the
-changes go to the top-most writeable layer, and underneath, the original
-file in the read-only image is unchanged. Since images don't change,
-images do not have state.
-
-![](/terms/images/docker-filesystems-debianrw.png)
-
-## Parent Image
-
-![](/terms/images/docker-filesystems-multilayer.png)
-
-Each image may depend on one other image, which forms the layer beneath
-it. We sometimes say that the lower image is the **parent** of the upper
-image.
-
-## Base Image
-
-An image that has no parent is a **base image**.
-
-## Image IDs
-
-All images are identified by a 64-hexadecimal-digit string (internally a
-256-bit value). To simplify their use, a short ID of the first 12
-characters can be used on the command line. There is a small possibility
-of short ID collisions, so the Docker server will always return the long
-ID.
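To get a feel for how small the short-ID collision risk is, a standard birthday-problem approximation can be applied. This is an illustrative estimate only, assuming IDs are uniformly random:

```python
import math

# A short ID is the first 12 hex characters, i.e. 48 bits of the full
# 256-bit ID. Birthday approximation for n uniformly random short IDs:
# P(collision) ~= 1 - exp(-n * (n - 1) / (2 * 2**48)).
def short_id_collision_probability(n_images):
    space = 2 ** 48
    return 1 - math.exp(-n_images * (n_images - 1) / (2 * space))

print(short_id_collision_probability(10 ** 6))  # about 0.002 for a million images
```

Even at a million images the chance of any two short IDs colliding stays well under one percent, which is why the short form is convenient on the command line while the server still returns the unambiguous long ID.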
diff --git a/terms/layer.md~ b/terms/layer.md~ deleted file mode 100644 index 3e8704cd07..0000000000 --- a/terms/layer.md~ +++ /dev/null @@ -1,35 +0,0 @@ -page_title: Layers -page_description: Organizing the Docker Root File System -page_keywords: containers, lxc, concepts, explanation, image, container - -# Layers - -## Introduction - -In a traditional Linux boot, the kernel first mounts the root [*File -System*](/terms/filesystem) as read-only, checks its -integrity, and then switches the whole rootfs volume to read-write mode. - -## Layer - -When Docker mounts the rootfs, it starts read-only, as in a traditional -Linux boot, but then, instead of changing the file system to read-write -mode, it takes advantage of a [union -mount](http://en.wikipedia.org/wiki/Union_mount) to add a read-write -file system *over* the read-only file system. In fact there may be -multiple read-only file systems stacked on top of each other. We think -of each one of these file systems as a **layer**. - -![](/terms/images/docker-filesystems-multilayer.png) - -At first, the top read-write layer has nothing in it, but any time a -process creates a file, this happens in the top layer. And if something -needs to update an existing file in a lower layer, then the file gets -copied to the upper layer and changes go into the copy. The version of -the file on the lower layer cannot be seen by the applications anymore, -but it is there, unchanged. - -## Union File System - -We call the union of the read-write layer and all the read-only layers a -**union file system**. 
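The copy-up behavior described above can be modeled with a stack of dictionaries. This is a conceptual sketch of union-mount semantics, not how any real union file system is implemented:

```python
# Conceptual model of a union file system: a list of read-only layers with
# one writable layer on top. Reads search top-down; writes copy up.
class UnionFS:
    def __init__(self, *readonly_layers):
        self.layers = list(readonly_layers)   # bottom .. top (read-only)
        self.rw = {}                          # the top read-write layer

    def read(self, path):
        if path in self.rw:                   # read-write layer wins first
            return self.rw[path]
        for layer in reversed(self.layers):   # then topmost read-only layer
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-up: the change lands in the top layer; the lower copy is
        # hidden from readers but remains unchanged underneath.
        self.rw[path] = data

base = {"/etc/issue": "Debian"}
fs = UnionFS(base)
fs.write("/etc/issue", "patched")
print(fs.read("/etc/issue"))   # patched (served from the read-write layer)
print(base["/etc/issue"])      # Debian  (lower layer untouched)
```

The second print shows the key property the Layer page describes: the lower layer's version is no longer visible to readers, but it is still there, unchanged.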
diff --git a/terms/registry.md~ b/terms/registry.md~
deleted file mode 100644
index 8a7e6237ec..0000000000
--- a/terms/registry.md~
+++ /dev/null
@@ -1,20 +0,0 @@
-page_title: Registry
-page_description: Definition of a Registry
-page_keywords: containers, concepts, explanation, image, repository, container
-
-# Registry
-
-## Introduction
-
-A Registry is a hosted service containing
-[*repositories*](/terms/repository/#repository-def) of
-[*images*](/terms/image/#image-def) which responds to the Registry API.
-
-The default registry can be accessed using a browser at
-[Docker Hub](https://hub.docker.com) or using the
-`sudo docker search` command.
-
-## Further Reading
-
-For more information see [*Working with
-Repositories*](/userguide/dockerrepos/#working-with-the-repository)
diff --git a/terms/repository.md~ b/terms/repository.md~
deleted file mode 100644
index c4d1d43539..0000000000
--- a/terms/repository.md~
+++ /dev/null
@@ -1,35 +0,0 @@
-page_title: Repository
-page_description: Definition of a Repository
-page_keywords: containers, concepts, explanation, image, repository, container
-
-# Repository
-
-## Introduction
-
-A repository is a set of images either on your local Docker server, or
-shared by pushing them to a [*Registry*](/terms/registry/#registry-def)
-server.
-
-Images can be associated with a repository (or multiple) by giving them
-an image name using one of three different commands:
-
-1. At build time (e.g., `sudo docker build -t IMAGENAME`),
-2. When committing a container (e.g.,
-   `sudo docker commit CONTAINERID IMAGENAME`) or
-3. When tagging an image ID with an image name (e.g.,
-   `sudo docker tag IMAGEID IMAGENAME`).
-
-A Fully Qualified Image Name (FQIN) can be made up of three parts:
-
-`[registry_hostname[:port]/][user_name/](repository_name:version_tag)`
-
-`user_name` and `registry_hostname` default to an empty string. When
-`registry_hostname` is an empty string, then `docker push` will push to
-`index.docker.io:80`.
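Splitting a name according to this grammar can be sketched as follows. This is a simplified illustration, not the real Docker client logic; in particular, treating a first path component that contains a `.` or `:` as a registry hostname is an assumption made for this example:

```python
# Simplified FQIN splitter for [registry_hostname[:port]/][user_name/](repo:tag).
# Illustrative only; the real docker client applies more rules than this.
def parse_fqin(name):
    registry, user = "", ""
    parts = name.split("/")
    # Assumption: a first component containing "." or ":" looks like a
    # hostname (e.g. "localhost:5000"), not a user name.
    if len(parts) == 3 or (len(parts) == 2 and ("." in parts[0] or ":" in parts[0])):
        registry, parts = parts[0], parts[1:]
    if len(parts) == 2:
        user, parts = parts[0], parts[1:]
    repo, _, tag = parts[0].partition(":")
    return {"registry": registry, "user": user, "repo": repo, "tag": tag or "latest"}

print(parse_fqin("training/sinatra:v2"))
# {'registry': '', 'user': 'training', 'repo': 'sinatra', 'tag': 'v2'}
```

With an empty registry part, pushes go to the default registry, as described above.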
-
-If you create a new repository which you want to share, you will need to
-set at least the `user_name`, as the default blank `user_name` prefix is
-reserved for official Docker images.
-
-For more information see [*Working with
-Repositories*](/userguide/dockerrepos/#working-with-the-repository)
diff --git a/userguide/dockerhub.md~ b/userguide/dockerhub.md~
deleted file mode 100644
index 62438b9948..0000000000
--- a/userguide/dockerhub.md~
+++ /dev/null
@@ -1,71 +0,0 @@
-page_title: Getting started with Docker Hub
-page_description: Introductory guide to getting an account on Docker Hub
-page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, central service, services, how to, container, containers, automation, collaboration, collaborators, registry, repo, repository, technology, github webhooks, trusted builds
-
-# Getting Started with Docker Hub
-
-This section provides a quick introduction to the [Docker Hub](https://hub.docker.com),
-including how to create an account.
-
-The [Docker Hub](https://hub.docker.com) is a centralized resource for working with
-Docker and its components. Docker Hub helps you collaborate with colleagues and get the
-most out of Docker. To do this, it provides services such as:
-
-* Docker image hosting.
-* User authentication.
-* Automated image builds and workflow tools such as build triggers and web
-  hooks.
-* Integration with GitHub and BitBucket.
-
-In order to use Docker Hub, you will first need to register and create an account. Don't
-worry, creating an account is simple and free.
-
-## Creating a Docker Hub Account
-
-There are two ways for you to register and create an account:
-
-1. Via the web, or
-2. Via the command line.
-
-### Register via the web
-
-Fill in the [sign-up form](https://hub.docker.com/account/signup/) by
-choosing your user name and password and entering a valid email address.
You can also
-sign up for the Docker Weekly mailing list, which has lots of information about what's
-going on in the world of Docker.
-
-![Register using the sign-up page](/userguide/register-web.png)
-
-### Register via the command line
-
-You can also create a Docker Hub account via the command line with the
-`docker login` command.
-
-    $ sudo docker login
-
-### Confirm your email
-
-Once you've filled in the form, check your email for a welcome message asking for
-confirmation so we can activate your account.
-
-### Login
-
-After you complete the confirmation process, you can log in using the web console:
-
-![Login using the web console](/userguide/login-web.png)
-
-Or via the command line with the `docker login` command:
-
-    $ sudo docker login
-
-Your Docker Hub account is now active and ready to use.
-
-## Next steps
-
-Next, let's start learning how to Dockerize applications with our "Hello world"
-exercise.
-
-Go to [Dockerizing Applications](/userguide/dockerizing).
-
diff --git a/userguide/dockerimages.md~ b/userguide/dockerimages.md~
deleted file mode 100644
index 6224479fb7..0000000000
--- a/userguide/dockerimages.md~
+++ /dev/null
@@ -1,546 +0,0 @@
-page_title: Working with Docker Images
-page_description: How to work with Docker images.
-page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker Hub, collaboration
-
-# Working with Docker Images
-
-In the [introduction](/introduction/understanding-docker/) we've discovered that Docker
-images are the basis of containers. In the
-[previous](/userguide/dockerizing/) [sections](/userguide/usingdocker/)
-we've used Docker images that already exist, for example the `ubuntu`
-image and the `training/webapp` image.
-
-We've also discovered that Docker stores downloaded images on the Docker
-host.
If an image isn't already present on the host then it'll be -downloaded from a registry: by default the -[Docker Hub Registry](https://registry.hub.docker.com). - -In this section we're going to explore Docker images a bit more -including: - -* Managing and working with images locally on your Docker host; -* Creating basic images; -* Uploading images to [Docker Hub Registry](https://registry.hub.docker.com). - -## Listing images on the host - -Let's start with listing the images we have locally on our host. You can -do this using the `docker images` command like so: - - $ sudo docker images - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - training/webapp latest fc77f57ad303 3 weeks ago 280.5 MB - ubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB - ubuntu saucy 5e019ab7bf6d 4 weeks ago 180 MB - ubuntu 12.04 74fe38d11401 4 weeks ago 209.6 MB - ubuntu precise 74fe38d11401 4 weeks ago 209.6 MB - ubuntu 12.10 a7cf8ae4e998 4 weeks ago 171.3 MB - ubuntu quantal a7cf8ae4e998 4 weeks ago 171.3 MB - ubuntu 14.04 99ec81b80c55 4 weeks ago 266 MB - ubuntu latest 99ec81b80c55 4 weeks ago 266 MB - ubuntu trusty 99ec81b80c55 4 weeks ago 266 MB - ubuntu 13.04 316b678ddf48 4 weeks ago 169.4 MB - ubuntu raring 316b678ddf48 4 weeks ago 169.4 MB - ubuntu 10.04 3db9c44f4520 4 weeks ago 183 MB - ubuntu lucid 3db9c44f4520 4 weeks ago 183 MB - -We can see the images we've previously used in our [user guide](/userguide/). -Each has been downloaded from [Docker Hub](https://hub.docker.com) when we -launched a container using that image. - -We can see three crucial pieces of information about our images in the listing. - -* What repository they came from, for example `ubuntu`. -* The tags for each image, for example `14.04`. -* The image ID of each image. - -A repository potentially holds multiple variants of an image. In the case of -our `ubuntu` image we can see multiple variants covering Ubuntu 10.04, 12.04, -12.10, 13.04, 13.10 and 14.04. 
Each variant is identified by a tag and you can -refer to a tagged image like so: - - ubuntu:14.04 - -So when we run a container we refer to a tagged image like so: - - $ sudo docker run -t -i ubuntu:14.04 /bin/bash - -If instead we wanted to run an Ubuntu 12.04 image we'd use: - - $ sudo docker run -t -i ubuntu:12.04 /bin/bash - -If you don't specify a variant, for example you just use `ubuntu`, then Docker -will default to using the `ubuntu:latest` image. - -> **Tip:** -> We recommend you always use a specific tagged image, for example -> `ubuntu:12.04`. That way you always know exactly what variant of an image is -> being used. - -## Getting a new image - -So how do we get new images? Well Docker will automatically download any image -we use that isn't already present on the Docker host. But this can potentially -add some time to the launch of a container. If we want to pre-load an image we -can download it using the `docker pull` command. Let's say we'd like to -download the `centos` image. - - $ sudo docker pull centos - Pulling repository centos - b7de3133ff98: Pulling dependent layers - 5cc9e91966f7: Pulling fs layer - 511136ea3c5a: Download complete - ef52fb1fe610: Download complete - . . . - - Status: Downloaded newer image for centos - -We can see that each layer of the image has been pulled down and now we -can run a container from this image and we won't have to wait to -download the image. - - $ sudo docker run -t -i centos /bin/bash - bash-4.1# - -## Finding images - -One of the features of Docker is that a lot of people have created Docker -images for a variety of purposes. Many of these have been uploaded to -[Docker Hub](https://hub.docker.com). We can search these images on the -[Docker Hub](https://hub.docker.com) website. - -![indexsearch](/userguide/search.png) - -We can also search for images on the command line using the `docker search` -command. 
Let's say our team wants an image with Ruby and Sinatra installed on
-which to do our web application development. We can search for a suitable image
-by using the `docker search` command to find all the images that contain the
-term `sinatra`.
-
-    $ sudo docker search sinatra
-    NAME                                     DESCRIPTION                                    STARS   OFFICIAL   AUTOMATED
-    training/sinatra                         Sinatra training image                         0                  [OK]
-    marceldegraaf/sinatra                    Sinatra test app                               0
-    mattwarren/docker-sinatra-demo                                                          0                  [OK]
-    luisbebop/docker-sinatra-hello-world                                                    0                  [OK]
-    bmorearty/handson-sinatra                handson-ruby + Sinatra for Hands on with D...  0
-    subwiz/sinatra                                                                          0
-    bmorearty/sinatra                                                                       0
-    . . .
-
-We can see we've returned a lot of images that use the term `sinatra`. We've
-returned a list of image names, descriptions, Stars (which measure the social
-popularity of images - if a user likes an image then they can "star" it), and
-the Official and Automated build statuses. Official repositories are built and
-maintained by the [Stackbrew](https://github.com/docker/stackbrew) project,
-and Automated repositories are [Automated Builds](
-/userguide/dockerrepos/#automated-builds) that allow you to validate the source
-and content of an image.
-
-We've reviewed the images available to use and we decided to use the
-`training/sinatra` image. So far we've seen two types of image repositories:
-images like `ubuntu`, which are called base or root images. These base images
-are provided by Docker, Inc. and are built, validated and supported. They can be
-identified by their single-word names.
-
-We've also seen user images, for example the `training/sinatra` image we've
-chosen. A user image belongs to a member of the Docker community and is built
-and maintained by them. You can identify user images as they are always
-prefixed with the user name, here `training`, of the user that created them.
-
-## Pulling our image
-
-We've identified a suitable image, `training/sinatra`, and now we can download it using the `docker pull` command.
- - $ sudo docker pull training/sinatra - -The team can now use this image by running their own containers. - - $ sudo docker run -t -i training/sinatra /bin/bash - root@a8cb6ce02d85:/# - -## Creating our own images - -The team has found the `training/sinatra` image pretty useful but it's not quite what -they need and we need to make some changes to it. There are two ways we can -update and create images. - -1. We can update a container created from an image and commit the results to an image. -2. We can use a `Dockerfile` to specify instructions to create an image. - - -### Updating and committing an image - -To update an image we first need to create a container from the image -we'd like to update. - - $ sudo docker run -t -i training/sinatra /bin/bash - root@0b2616b0e5a8:/# - -> **Note:** -> Take note of the container ID that has been created, `0b2616b0e5a8`, as we'll -> need it in a moment. - -Inside our running container let's add the `json` gem. - - root@0b2616b0e5a8:/# gem install json - -Once this has completed let's exit our container using the `exit` -command. - -Now we have a container with the change we want to make. We can then -commit a copy of this container to an image using the `docker commit` -command. - - $ sudo docker commit -m "Added json gem" -a "Kate Smith" \ - 0b2616b0e5a8 ouruser/sinatra:v2 - 4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c - -Here we've used the `docker commit` command. We've specified two flags: `-m` -and `-a`. The `-m` flag allows us to specify a commit message, much like you -would with a commit on a version control system. The `-a` flag allows us to -specify an author for our update. - -We've also specified the container we want to create this new image from, -`0b2616b0e5a8` (the ID we recorded earlier) and we've specified a target for -the image: - - ouruser/sinatra:v2 - -Let's break this target down. It consists of a new user, `ouruser`, that we're -writing this image to. 
We've also specified the name of the image; here we're
-keeping the original image name `sinatra`. Finally we're specifying a tag for
-the image: `v2`.
-
-We can then look at our new `ouruser/sinatra` image using the `docker images`
-command.
-
-    $ sudo docker images
-    REPOSITORY          TAG     IMAGE ID       CREATED        VIRTUAL SIZE
-    training/sinatra    latest  5bc342fa0b91   10 hours ago   446.7 MB
-    ouruser/sinatra     v2      3c59e02ddd1a   10 hours ago   446.7 MB
-    ouruser/sinatra     latest  5db5f8471261   10 hours ago   446.7 MB
-
-To use our new image to create a container we can then:
-
-    $ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash
-    root@78e82f680994:/#
-
-### Building an image from a `Dockerfile`
-
-Using the `docker commit` command is a pretty simple way of extending an image,
-but it's a bit cumbersome and it's not easy to share a development process for
-images amongst a team. Instead we can use a new command, `docker build`, to
-build new images from scratch.
-
-To do this we create a `Dockerfile` that contains a set of instructions that
-tell Docker how to build our image.
-
-Let's create a directory and a `Dockerfile` first.
-
-    $ mkdir sinatra
-    $ cd sinatra
-    $ touch Dockerfile
-
-Each instruction creates a new layer of the image. Let's look at a simple
-example now for building our own Sinatra image for our development team.
-
-    # This is a comment
-    FROM ubuntu:14.04
-    MAINTAINER Kate Smith
-    RUN apt-get update && apt-get install -y ruby ruby-dev
-    RUN gem install sinatra
-
-Let's look at what our `Dockerfile` does. Each instruction is an uppercase
-keyword followed by a statement.
-
-    INSTRUCTION statement
-
-> **Note:**
-> We use `#` to indicate a comment.
-
-The first instruction `FROM` tells Docker what the source of our image is; in
-this case we're basing our new image on an Ubuntu 14.04 image.
-
-Next we use the `MAINTAINER` instruction to specify who maintains our new image.
-
-Lastly, we've specified two `RUN` instructions.
A `RUN` instruction executes -a command inside the image, for example installing a package. Here we're -updating our APT cache, installing Ruby and RubyGems and then installing the -Sinatra gem. - -> **Note:** -> There are [a lot more instructions available to us in a Dockerfile](/reference/builder). - -Now let's take our `Dockerfile` and use the `docker build` command to build an image. - - $ sudo docker build -t ouruser/sinatra:v2 . - Sending build context to Docker daemon 2.048 kB - Sending build context to Docker daemon - Step 0 : FROM ubuntu:14.04 - ---> e54ca5efa2e9 - Step 1 : MAINTAINER Kate Smith - ---> Using cache - ---> 851baf55332b - Step 2 : RUN apt-get update && apt-get install -y ruby ruby-dev - ---> Running in 3a2558904e9b - Selecting previously unselected package libasan0:amd64. - (Reading database ... 11518 files and directories currently installed.) - Preparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ... - Unpacking libasan0:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libatomic1:amd64. - Preparing to unpack .../libatomic1_4.8.2-19ubuntu1_amd64.deb ... - Unpacking libatomic1:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libgmp10:amd64. - Preparing to unpack .../libgmp10_2%3a5.1.3+dfsg-1ubuntu1_amd64.deb ... - Unpacking libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ... - Selecting previously unselected package libisl10:amd64. - Preparing to unpack .../libisl10_0.12.2-1_amd64.deb ... - Unpacking libisl10:amd64 (0.12.2-1) ... - Selecting previously unselected package libcloog-isl4:amd64. - Preparing to unpack .../libcloog-isl4_0.18.2-1_amd64.deb ... - Unpacking libcloog-isl4:amd64 (0.18.2-1) ... - Selecting previously unselected package libgomp1:amd64. - Preparing to unpack .../libgomp1_4.8.2-19ubuntu1_amd64.deb ... - Unpacking libgomp1:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libitm1:amd64. - Preparing to unpack .../libitm1_4.8.2-19ubuntu1_amd64.deb ... 
- Unpacking libitm1:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libmpfr4:amd64. - Preparing to unpack .../libmpfr4_3.1.2-1_amd64.deb ... - Unpacking libmpfr4:amd64 (3.1.2-1) ... - Selecting previously unselected package libquadmath0:amd64. - Preparing to unpack .../libquadmath0_4.8.2-19ubuntu1_amd64.deb ... - Unpacking libquadmath0:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libtsan0:amd64. - Preparing to unpack .../libtsan0_4.8.2-19ubuntu1_amd64.deb ... - Unpacking libtsan0:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package libyaml-0-2:amd64. - Preparing to unpack .../libyaml-0-2_0.1.4-3ubuntu3_amd64.deb ... - Unpacking libyaml-0-2:amd64 (0.1.4-3ubuntu3) ... - Selecting previously unselected package libmpc3:amd64. - Preparing to unpack .../libmpc3_1.0.1-1ubuntu1_amd64.deb ... - Unpacking libmpc3:amd64 (1.0.1-1ubuntu1) ... - Selecting previously unselected package openssl. - Preparing to unpack .../openssl_1.0.1f-1ubuntu2.4_amd64.deb ... - Unpacking openssl (1.0.1f-1ubuntu2.4) ... - Selecting previously unselected package ca-certificates. - Preparing to unpack .../ca-certificates_20130906ubuntu2_all.deb ... - Unpacking ca-certificates (20130906ubuntu2) ... - Selecting previously unselected package manpages. - Preparing to unpack .../manpages_3.54-1ubuntu1_all.deb ... - Unpacking manpages (3.54-1ubuntu1) ... - Selecting previously unselected package binutils. - Preparing to unpack .../binutils_2.24-5ubuntu3_amd64.deb ... - Unpacking binutils (2.24-5ubuntu3) ... - Selecting previously unselected package cpp-4.8. - Preparing to unpack .../cpp-4.8_4.8.2-19ubuntu1_amd64.deb ... - Unpacking cpp-4.8 (4.8.2-19ubuntu1) ... - Selecting previously unselected package cpp. - Preparing to unpack .../cpp_4%3a4.8.2-1ubuntu6_amd64.deb ... - Unpacking cpp (4:4.8.2-1ubuntu6) ... - Selecting previously unselected package libgcc-4.8-dev:amd64. - Preparing to unpack .../libgcc-4.8-dev_4.8.2-19ubuntu1_amd64.deb ... 
- Unpacking libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ... - Selecting previously unselected package gcc-4.8. - Preparing to unpack .../gcc-4.8_4.8.2-19ubuntu1_amd64.deb ... - Unpacking gcc-4.8 (4.8.2-19ubuntu1) ... - Selecting previously unselected package gcc. - Preparing to unpack .../gcc_4%3a4.8.2-1ubuntu6_amd64.deb ... - Unpacking gcc (4:4.8.2-1ubuntu6) ... - Selecting previously unselected package libc-dev-bin. - Preparing to unpack .../libc-dev-bin_2.19-0ubuntu6_amd64.deb ... - Unpacking libc-dev-bin (2.19-0ubuntu6) ... - Selecting previously unselected package linux-libc-dev:amd64. - Preparing to unpack .../linux-libc-dev_3.13.0-30.55_amd64.deb ... - Unpacking linux-libc-dev:amd64 (3.13.0-30.55) ... - Selecting previously unselected package libc6-dev:amd64. - Preparing to unpack .../libc6-dev_2.19-0ubuntu6_amd64.deb ... - Unpacking libc6-dev:amd64 (2.19-0ubuntu6) ... - Selecting previously unselected package ruby. - Preparing to unpack .../ruby_1%3a1.9.3.4_all.deb ... - Unpacking ruby (1:1.9.3.4) ... - Selecting previously unselected package ruby1.9.1. - Preparing to unpack .../ruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ... - Unpacking ruby1.9.1 (1.9.3.484-2ubuntu1) ... - Selecting previously unselected package libruby1.9.1. - Preparing to unpack .../libruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ... - Unpacking libruby1.9.1 (1.9.3.484-2ubuntu1) ... - Selecting previously unselected package manpages-dev. - Preparing to unpack .../manpages-dev_3.54-1ubuntu1_all.deb ... - Unpacking manpages-dev (3.54-1ubuntu1) ... - Selecting previously unselected package ruby1.9.1-dev. - Preparing to unpack .../ruby1.9.1-dev_1.9.3.484-2ubuntu1_amd64.deb ... - Unpacking ruby1.9.1-dev (1.9.3.484-2ubuntu1) ... - Selecting previously unselected package ruby-dev. - Preparing to unpack .../ruby-dev_1%3a1.9.3.4_all.deb ... - Unpacking ruby-dev (1:1.9.3.4) ... - Setting up libasan0:amd64 (4.8.2-19ubuntu1) ... - Setting up libatomic1:amd64 (4.8.2-19ubuntu1) ... 
- Setting up libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ... - Setting up libisl10:amd64 (0.12.2-1) ... - Setting up libcloog-isl4:amd64 (0.18.2-1) ... - Setting up libgomp1:amd64 (4.8.2-19ubuntu1) ... - Setting up libitm1:amd64 (4.8.2-19ubuntu1) ... - Setting up libmpfr4:amd64 (3.1.2-1) ... - Setting up libquadmath0:amd64 (4.8.2-19ubuntu1) ... - Setting up libtsan0:amd64 (4.8.2-19ubuntu1) ... - Setting up libyaml-0-2:amd64 (0.1.4-3ubuntu3) ... - Setting up libmpc3:amd64 (1.0.1-1ubuntu1) ... - Setting up openssl (1.0.1f-1ubuntu2.4) ... - Setting up ca-certificates (20130906ubuntu2) ... - debconf: unable to initialize frontend: Dialog - debconf: (TERM is not set, so the dialog frontend is not usable.) - debconf: falling back to frontend: Readline - debconf: unable to initialize frontend: Readline - debconf: (This frontend requires a controlling tty.) - debconf: falling back to frontend: Teletype - Setting up manpages (3.54-1ubuntu1) ... - Setting up binutils (2.24-5ubuntu3) ... - Setting up cpp-4.8 (4.8.2-19ubuntu1) ... - Setting up cpp (4:4.8.2-1ubuntu6) ... - Setting up libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ... - Setting up gcc-4.8 (4.8.2-19ubuntu1) ... - Setting up gcc (4:4.8.2-1ubuntu6) ... - Setting up libc-dev-bin (2.19-0ubuntu6) ... - Setting up linux-libc-dev:amd64 (3.13.0-30.55) ... - Setting up libc6-dev:amd64 (2.19-0ubuntu6) ... - Setting up manpages-dev (3.54-1ubuntu1) ... - Setting up libruby1.9.1 (1.9.3.484-2ubuntu1) ... - Setting up ruby1.9.1-dev (1.9.3.484-2ubuntu1) ... - Setting up ruby-dev (1:1.9.3.4) ... - Setting up ruby (1:1.9.3.4) ... - Setting up ruby1.9.1 (1.9.3.484-2ubuntu1) ... - Processing triggers for libc-bin (2.19-0ubuntu6) ... - Processing triggers for ca-certificates (20130906ubuntu2) ... - Updating certificates in /etc/ssl/certs... 164 added, 0 removed; done. - Running hooks in /etc/ca-certificates/update.d....done. 
- ---> c55c31703134 - Removing intermediate container 3a2558904e9b - Step 3 : RUN gem install sinatra - ---> Running in 6b81cb6313e5 - unable to convert "\xC3" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping - unable to convert "\xC3" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping - Successfully installed rack-1.5.2 - Successfully installed tilt-1.4.1 - Successfully installed rack-protection-1.5.3 - Successfully installed sinatra-1.4.5 - 4 gems installed - Installing ri documentation for rack-1.5.2... - Installing ri documentation for tilt-1.4.1... - Installing ri documentation for rack-protection-1.5.3... - Installing ri documentation for sinatra-1.4.5... - Installing RDoc documentation for rack-1.5.2... - Installing RDoc documentation for tilt-1.4.1... - Installing RDoc documentation for rack-protection-1.5.3... - Installing RDoc documentation for sinatra-1.4.5... - ---> 97feabe5d2ed - Removing intermediate container 6b81cb6313e5 - Successfully built 97feabe5d2ed - -We've specified our `docker build` command and used the `-t` flag to identify -our new image as belonging to the user `ouruser`, the repository name `sinatra` -and given it the tag `v2`. - -We've also specified the location of our `Dockerfile` using the `.` to -indicate a `Dockerfile` in the current directory. - -> **Note:** -> You can also specify a path to a `Dockerfile`. - -Now we can see the build process at work. The first thing Docker does is -upload the build context: basically the contents of the directory you're -building in. This is done because the Docker daemon does the actual -build of the image and it needs the local context to do it. - -Next we can see each instruction in the `Dockerfile` being executed -step-by-step. We can see that each step creates a new container, runs -the instruction inside that container and then commits that change - -just like the `docker commit` work flow we saw earlier. 
When all the -instructions have executed we're left with the `97feabe5d2ed` image -(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate -containers will get removed to clean things up. - -> **Note:** -> An image can't have more than 127 layers regardless of the storage driver. -> This limitation is set globally to encourage optimization of the overall -> size of images. - -We can then create a container from our new image. - - $ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash - root@8196968dac35:/# - -> **Note:** -> This is just a brief introduction to creating images. We've -> skipped a whole bunch of other instructions that you can use. We'll see more of -> those instructions in later sections of the Guide or you can refer to the -> [`Dockerfile`](/reference/builder/) reference for a -> detailed description and examples of every instruction. -> To help you write a clear, readable, maintainable `Dockerfile`, we've also -> written a [`Dockerfile` Best Practices guide](/articles/dockerfile_best-practices). - -### More - -To learn more, check out the [Dockerfile tutorial](/userguide/level1). - -## Setting tags on an image - -You can also add a tag to an existing image after you commit or build it. We -can do this using the `docker tag` command. Let's add a new tag to our -`ouruser/sinatra` image. - - $ sudo docker tag 5db5f8471261 ouruser/sinatra:devel - -The `docker tag` command takes the ID of the image, here `5db5f8471261`, and our -user name, the repository name and the new tag. - -Let's see our new tag using the `docker images` command. 
-
-    $ sudo docker images ouruser/sinatra
-    REPOSITORY          TAG      IMAGE ID       CREATED        VIRTUAL SIZE
-    ouruser/sinatra     latest   5db5f8471261   11 hours ago   446.7 MB
-    ouruser/sinatra     devel    5db5f8471261   11 hours ago   446.7 MB
-    ouruser/sinatra     v2       5db5f8471261   11 hours ago   446.7 MB
-
-## Push an image to Docker Hub
-
-Once you've built or created a new image you can push it to [Docker
-Hub](https://hub.docker.com) using the `docker push` command. This
-allows you to share it with others, either publicly or by pushing it into
-[a private repository](https://registry.hub.docker.com/plans/).
-
-    $ sudo docker push ouruser/sinatra
-    The push refers to a repository [ouruser/sinatra] (len: 1)
-    Sending image list
-    Pushing repository ouruser/sinatra (3 tags)
-    . . .
-
-## Remove an image from the host
-
-You can also remove images on your Docker host in a way [similar to
-containers](/userguide/usingdocker) using the `docker rmi` command.
-
-Let's delete the `training/sinatra` image as we don't need it anymore.
-
-    $ sudo docker rmi training/sinatra
-    Untagged: training/sinatra:latest
-    Deleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d
-    Deleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f
-    Deleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0
-
-> **Note:** In order to remove an image from the host, please make sure
-> that there are no containers actively based on it.
-
-# Next steps
-
-Until now we've seen how to build individual applications inside Docker
-containers. Now learn how to build whole application stacks with Docker
-by linking together multiple Docker containers.
-
-Test your Dockerfile knowledge with the
-[Dockerfile tutorial](/userguide/level1).
-
-Go to [Linking Containers Together](/userguide/dockerlinks).
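As an aside on the `user/repository:tag` naming used by `docker build -t` and `docker tag` above, the shape of an image reference can be sketched as a tiny parser. This is an illustrative Python sketch, not Docker's own parsing code (which also handles registry hosts and digests); the `latest` default reflects the untagged images shown in the `docker images` output.

```python
def parse_image_ref(ref):
    """Split a 'user/repo:tag' image reference into its parts.

    Illustrative only: real Docker reference parsing also covers
    registry hostnames and digests, which are ignored here.
    """
    if ":" in ref:
        name, tag = ref.rsplit(":", 1)
    else:
        # An untagged reference implies the 'latest' tag
        name, tag = ref, "latest"
    user, _, repo = name.rpartition("/")
    return {"user": user or None, "repo": repo, "tag": tag}

print(parse_image_ref("ouruser/sinatra:v2"))
# {'user': 'ouruser', 'repo': 'sinatra', 'tag': 'v2'}
```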
- - diff --git a/userguide/dockerizing.md~ b/userguide/dockerizing.md~ deleted file mode 100644 index 6f56a56955..0000000000 --- a/userguide/dockerizing.md~ +++ /dev/null @@ -1,194 +0,0 @@ -page_title: Dockerizing Applications: A "Hello world" -page_description: A simple "Hello world" exercise that introduced you to Docker. -page_keywords: docker guide, docker, docker platform, virtualization framework, how to, dockerize, dockerizing apps, dockerizing applications, container, containers - -# Dockerizing Applications: A "Hello world" - -*So what's this Docker thing all about?* - -Docker allows you to run applications inside containers. Running an -application inside a container takes a single command: `docker run`. - -{{ include "no-remote-sudo.md" }} - -## Hello world - -Let's try it now. - - $ sudo docker run ubuntu:14.04 /bin/echo 'Hello world' - Hello world - -And you just launched your first container! - -So what just happened? Let's step through what the `docker run` command -did. - -First we specified the `docker` binary and the command we wanted to -execute, `run`. The `docker run` combination *runs* containers. - -Next we specified an image: `ubuntu:14.04`. This is the source of the container -we ran. Docker calls this an image. In this case we used an Ubuntu 14.04 -operating system image. - -When you specify an image, Docker looks first for the image on your -Docker host. If it can't find it then it downloads the image from the public -image registry: [Docker Hub](https://hub.docker.com). - -Next we told Docker what command to run inside our new container: - - /bin/echo 'Hello world' - -When our container was launched Docker created a new Ubuntu 14.04 -environment and then executed the `/bin/echo` command inside it. We saw -the result on the command line: - - Hello world - -So what happened to our container after that? Well Docker containers -only run as long as the command you specify is active. 
Here, as soon as
-`Hello world` was echoed, the container stopped.
-
-## An Interactive Container
-
-Let's try the `docker run` command again, this time specifying a new
-command to run in our container.
-
-    $ sudo docker run -t -i ubuntu:14.04 /bin/bash
-    root@af8bae53bdd3:/#
-
-Here we've again specified the `docker run` command and launched an
-`ubuntu:14.04` image. But we've also passed in two flags: `-t` and `-i`.
-The `-t` flag assigns a pseudo-tty or terminal inside our new container
-and the `-i` flag allows us to make an interactive connection by
-grabbing the standard in (`STDIN`) of the container.
-
-We've also specified a new command for our container to run:
-`/bin/bash`. This will launch a Bash shell inside our container.
-
-So now when our container is launched we can see that we've got a
-command prompt inside it:
-
-    root@af8bae53bdd3:/#
-
-Let's try running some commands inside our container:
-
-    root@af8bae53bdd3:/# pwd
-    /
-    root@af8bae53bdd3:/# ls
-    bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
-
-You can see we've run the `pwd` command to show our current directory and can
-see we're in the `/` root directory. We've also done a directory listing
-of the root directory which shows us what looks like a typical Linux
-file system.
-
-You can play around inside this container and when you're done you can
-use the `exit` command or enter Ctrl-D to finish.
-
-    root@af8bae53bdd3:/# exit
-
-As with our previous container, once the Bash shell process has
-finished, the container is stopped.
-
-## A Daemonized Hello world
-
-Now a container that runs a command and then exits has some uses but
-it's not overly helpful. Let's create a container that runs as a daemon,
-like most of the applications we're probably going to run with Docker.
-
-Again we can do this with the `docker run` command:
-
-    $ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
-    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
-
-Wait, what? Where's our "Hello world"? Let's look at what we've run here.
-It should look pretty familiar. We ran `docker run` but this time we
-specified a flag: `-d`. The `-d` flag tells Docker to run the container
-and put it in the background, to daemonize it.
-
-We also specified the same image: `ubuntu:14.04`.
-
-Finally, we specified a command to run:
-
-    /bin/sh -c "while true; do echo hello world; sleep 1; done"
-
-This is the (hello) world's silliest daemon: a shell script that echoes
-`hello world` forever.
-
-So why aren't we seeing any `hello world`'s? Instead Docker has returned
-a really long string:
-
-    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
-
-This really long string is called a *container ID*. It uniquely
-identifies a container so we can work with it.
-
-> **Note:**
-> The container ID is a bit long and unwieldy and a bit later
-> on we'll see a shorter ID and some ways to name our containers to make
-> working with them easier.
-
-We can use this container ID to see what's happening with our `hello world` daemon.
-
-First, let's make sure our container is running. We can
-do that with the `docker ps` command. The `docker ps` command queries
-the Docker daemon for information about all the containers it knows
-about.
-
-    $ sudo docker ps
-    CONTAINER ID  IMAGE         COMMAND               CREATED        STATUS       PORTS  NAMES
-    1e5535038e28  ubuntu:14.04  /bin/sh -c 'while tr  2 minutes ago  Up 1 minute         insane_babbage
-
-Here we can see our daemonized container. The `docker ps` command has returned some useful
-information about it, starting with a shorter variant of its container ID:
-`1e5535038e28`.
- -We can also see the image we used to build it, `ubuntu:14.04`, the command it -is running, its status and an automatically assigned name, -`insane_babbage`. - -> **Note:** -> Docker automatically names any containers you start, a -> little later on we'll see how you can specify your own names. - -Okay, so we now know it's running. But is it doing what we asked it to do? To see this -we're going to look inside the container using the `docker logs` -command. Let's use the container name Docker assigned. - - $ sudo docker logs insane_babbage - hello world - hello world - hello world - . . . - -The `docker logs` command looks inside the container and returns its standard -output: in this case the output of our command `hello world`. - -Awesome! Our daemon is working and we've just created our first -Dockerized application! - -Now we've established we can create our own containers let's tidy up -after ourselves and stop our daemonized container. To do this we use the -`docker stop` command. - - $ sudo docker stop insane_babbage - insane_babbage - -The `docker stop` command tells Docker to politely stop the running -container. If it succeeds it will return the name of the container it -has just stopped. - -Let's check it worked with the `docker ps` command. - - $ sudo docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - -Excellent. Our container has been stopped. - -# Next steps - -Now we've seen how simple it is to get started with Docker let's learn how to -do some more advanced tasks. - -Go to [Working With Containers](/userguide/usingdocker). - diff --git a/userguide/dockerizing.md~~ b/userguide/dockerizing.md~~ deleted file mode 100644 index 6f56a56955..0000000000 --- a/userguide/dockerizing.md~~ +++ /dev/null @@ -1,194 +0,0 @@ -page_title: Dockerizing Applications: A "Hello world" -page_description: A simple "Hello world" exercise that introduced you to Docker. 
-page_keywords: docker guide, docker, docker platform, virtualization framework, how to, dockerize, dockerizing apps, dockerizing applications, container, containers - -# Dockerizing Applications: A "Hello world" - -*So what's this Docker thing all about?* - -Docker allows you to run applications inside containers. Running an -application inside a container takes a single command: `docker run`. - -{{ include "no-remote-sudo.md" }} - -## Hello world - -Let's try it now. - - $ sudo docker run ubuntu:14.04 /bin/echo 'Hello world' - Hello world - -And you just launched your first container! - -So what just happened? Let's step through what the `docker run` command -did. - -First we specified the `docker` binary and the command we wanted to -execute, `run`. The `docker run` combination *runs* containers. - -Next we specified an image: `ubuntu:14.04`. This is the source of the container -we ran. Docker calls this an image. In this case we used an Ubuntu 14.04 -operating system image. - -When you specify an image, Docker looks first for the image on your -Docker host. If it can't find it then it downloads the image from the public -image registry: [Docker Hub](https://hub.docker.com). - -Next we told Docker what command to run inside our new container: - - /bin/echo 'Hello world' - -When our container was launched Docker created a new Ubuntu 14.04 -environment and then executed the `/bin/echo` command inside it. We saw -the result on the command line: - - Hello world - -So what happened to our container after that? Well Docker containers -only run as long as the command you specify is active. Here, as soon as -`Hello world` was echoed, the container stopped. - -## An Interactive Container - -Let's try the `docker run` command again, this time specifying a new -command to run in our container. - - $ sudo docker run -t -i ubuntu:14.04 /bin/bash - root@af8bae53bdd3:/# - -Here we've again specified the `docker run` command and launched an -`ubuntu:14.04` image. 
But we've also passed in two flags: `-t` and `-i`.
-The `-t` flag assigns a pseudo-tty or terminal inside our new container
-and the `-i` flag allows us to make an interactive connection by
-grabbing the standard in (`STDIN`) of the container.
-
-We've also specified a new command for our container to run:
-`/bin/bash`. This will launch a Bash shell inside our container.
-
-So now when our container is launched we can see that we've got a
-command prompt inside it:
-
-    root@af8bae53bdd3:/#
-
-Let's try running some commands inside our container:
-
-    root@af8bae53bdd3:/# pwd
-    /
-    root@af8bae53bdd3:/# ls
-    bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
-
-You can see we've run the `pwd` command to show our current directory and can
-see we're in the `/` root directory. We've also done a directory listing
-of the root directory which shows us what looks like a typical Linux
-file system.
-
-You can play around inside this container and when you're done you can
-use the `exit` command or enter Ctrl-D to finish.
-
-    root@af8bae53bdd3:/# exit
-
-As with our previous container, once the Bash shell process has
-finished, the container is stopped.
-
-## A Daemonized Hello world
-
-Now a container that runs a command and then exits has some uses but
-it's not overly helpful. Let's create a container that runs as a daemon,
-like most of the applications we're probably going to run with Docker.
-
-Again we can do this with the `docker run` command:
-
-    $ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
-    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
-
-Wait, what? Where's our "Hello world"? Let's look at what we've run here.
-It should look pretty familiar. We ran `docker run` but this time we
-specified a flag: `-d`. The `-d` flag tells Docker to run the container
-and put it in the background, to daemonize it.
-
-We also specified the same image: `ubuntu:14.04`.
-
-Finally, we specified a command to run:
-
-    /bin/sh -c "while true; do echo hello world; sleep 1; done"
-
-This is the (hello) world's silliest daemon: a shell script that echoes
-`hello world` forever.
-
-So why aren't we seeing any `hello world`'s? Instead Docker has returned
-a really long string:
-
-    1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147
-
-This really long string is called a *container ID*. It uniquely
-identifies a container so we can work with it.
-
-> **Note:**
-> The container ID is a bit long and unwieldy and a bit later
-> on we'll see a shorter ID and some ways to name our containers to make
-> working with them easier.
-
-We can use this container ID to see what's happening with our `hello world` daemon.
-
-First, let's make sure our container is running. We can
-do that with the `docker ps` command. The `docker ps` command queries
-the Docker daemon for information about all the containers it knows
-about.
-
-    $ sudo docker ps
-    CONTAINER ID  IMAGE         COMMAND               CREATED        STATUS       PORTS  NAMES
-    1e5535038e28  ubuntu:14.04  /bin/sh -c 'while tr  2 minutes ago  Up 1 minute         insane_babbage
-
-Here we can see our daemonized container. The `docker ps` command has returned some useful
-information about it, starting with a shorter variant of its container ID:
-`1e5535038e28`.
-
-We can also see the image we used to build it, `ubuntu:14.04`, the command it
-is running, its status and an automatically assigned name,
-`insane_babbage`.
-
-> **Note:**
-> Docker automatically names any containers you start, a
-> little later on we'll see how you can specify your own names.
-
-Okay, so we now know it's running. But is it doing what we asked it to do? To see this
-we're going to look inside the container using the `docker logs`
-command. Let's use the container name Docker assigned.
-
-    $ sudo docker logs insane_babbage
-    hello world
-    hello world
-    hello world
-    . . .
- -The `docker logs` command looks inside the container and returns its standard -output: in this case the output of our command `hello world`. - -Awesome! Our daemon is working and we've just created our first -Dockerized application! - -Now we've established we can create our own containers let's tidy up -after ourselves and stop our daemonized container. To do this we use the -`docker stop` command. - - $ sudo docker stop insane_babbage - insane_babbage - -The `docker stop` command tells Docker to politely stop the running -container. If it succeeds it will return the name of the container it -has just stopped. - -Let's check it worked with the `docker ps` command. - - $ sudo docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - -Excellent. Our container has been stopped. - -# Next steps - -Now we've seen how simple it is to get started with Docker let's learn how to -do some more advanced tasks. - -Go to [Working With Containers](/userguide/usingdocker). - diff --git a/userguide/dockerlinks.md~ b/userguide/dockerlinks.md~ deleted file mode 100644 index 17a9c41b5f..0000000000 --- a/userguide/dockerlinks.md~ +++ /dev/null @@ -1,328 +0,0 @@ -page_title: Linking Containers Together -page_description: Learn how to connect Docker containers together. -page_keywords: Examples, Usage, user guide, links, linking, docker, documentation, examples, names, name, container naming, port, map, network port, network - -# Linking Containers Together - -In [the Using Docker section](/userguide/usingdocker), you saw how you can -connect to a service running inside a Docker container via a network -port. But a port connection is only one way you can interact with services and -applications running inside Docker containers. In this section, we'll briefly revisit -connecting via a network port and then we'll introduce you to another method of access: -container linking. 
- -## Connect using Network port mapping - -In [the Using Docker section](/userguide/usingdocker), you created a -container that ran a Python Flask application: - - $ sudo docker run -d -P training/webapp python app.py - -> **Note:** -> Containers have an internal network and an IP address -> (as we saw when we used the `docker inspect` command to show the container's -> IP address in the [Using Docker](/userguide/usingdocker/) section). -> Docker can have a variety of network configurations. You can see more -> information on Docker networking [here](/articles/networking/). - -When that container was created, the `-P` flag was used to automatically map any -network ports inside it to a random high port from the range 49153 -to 65535 on our Docker host. Next, when `docker ps` was run, you saw that -port 5000 in the container was bound to port 49155 on the host. - - $ sudo docker ps nostalgic_morse - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - bc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse - -You also saw how you can bind a container's ports to a specific port using -the `-p` flag: - - $ sudo docker run -d -p 5000:5000 training/webapp python app.py - -And you saw why this isn't such a great idea because it constrains you to -only one container on that specific port. - -There are also a few other ways you can configure the `-p` flag. By -default the `-p` flag will bind the specified port to all interfaces on -the host machine. But you can also specify a binding to a specific -interface, for example only to the `localhost`. - - $ sudo docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py - -This would bind port 5000 inside the container to port 5000 on the -`localhost` or `127.0.0.1` interface on the host machine. 
- -Or, to bind port 5000 of the container to a dynamic port but only on the -`localhost`, you could use: - - $ sudo docker run -d -p 127.0.0.1::5000 training/webapp python app.py - -You can also bind UDP ports by adding a trailing `/udp`. For example: - - $ sudo docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py - -You also learned about the useful `docker port` shortcut which showed us the -current port bindings. This is also useful for showing you specific port -configurations. For example, if you've bound the container port to the -`localhost` on the host machine, then the `docker port` output will reflect that. - - $ sudo docker port nostalgic_morse 5000 - 127.0.0.1:49155 - -> **Note:** -> The `-p` flag can be used multiple times to configure multiple ports. - -## Connect with the linking system - -Network port mappings are not the only way Docker containers can connect -to one another. Docker also has a linking system that allows you to link -multiple containers together and send connection information from one to another. -When containers are linked, information about a source container can be sent to a -recipient container. This allows the recipient to see selected data describing -aspects of the source container. - -### The importance of naming - -To establish links, Docker relies on the names of your containers. -You've already seen that each container you create has an automatically -created name; indeed you've become familiar with our old friend -`nostalgic_morse` during this guide. You can also name containers -yourself. This naming provides two useful functions: - -1. It can be useful to name containers that do specific functions in a way - that makes it easier for you to remember them, for example naming a - container containing a web application `web`. - -2. It provides Docker with a reference point that allows it to refer to other - containers, for example, you can specify to link the container `web` to container `db`. 
- -You can name your container by using the `--name` flag, for example: - - $ sudo docker run -d -P --name web training/webapp python app.py - -This launches a new container and uses the `--name` flag to -name the container `web`. You can see the container's name using the -`docker ps` command. - - $ sudo docker ps -l - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - aed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154->5000/tcp web - -You can also use `docker inspect` to return the container's name. - - $ sudo docker inspect -f "{{ .Name }}" aed84ee21bde - /web - -> **Note:** -> Container names have to be unique. That means you can only call -> one container `web`. If you want to re-use a container name you must delete -> the old container (with `docker rm`) before you can create a new -> container with the same name. As an alternative you can use the `--rm` -> flag with the `docker run` command. This will delete the container -> immediately after it is stopped. - -## Communication across links - -Links allow containers to discover each other and securely transfer information about one -container to another container. When you set up a link, you create a conduit between a -source container and a recipient container. The recipient can then access select data -about the source. To create a link, you use the `--link` flag. First, create a new -container, this time one containing a database. - - $ sudo docker run -d --name db training/postgres - -This creates a new container called `db` from the `training/postgres` -image, which contains a PostgreSQL database. - -Now, you need to delete the `web` container you created previously so you can replace it -with a linked one: - - $ sudo docker rm -f web - -Now, create a new `web` container and link it with your `db` container. 
-
-    $ sudo docker run -d -P --name web --link db:db training/webapp python app.py
-
-This will link the new `web` container with the `db` container you created
-earlier. The `--link` flag takes the form:
-
-    --link <name or id>:alias
-
-Where `name` is the name of the container we're linking to and `alias` is an
-alias for the link name. You'll see how that alias gets used shortly.
-
-Next, inspect your linked containers with `docker inspect`:
-
-    $ sudo docker inspect -f "{{ .HostConfig.Links }}" web
-    [/db:/web/db]
-
-You can see that the `web` container is now linked to the `db` container
-as `web/db`. This allows it to access information about the `db` container.
-
-So what does linking the containers actually do? You've learned that a link allows a
-source container to provide information about itself to a recipient container. In
-our example, the recipient, `web`, can access information about the source `db`. To do
-this, Docker creates a secure tunnel between the containers that doesn't need to
-expose any ports externally on the container; you'll note when we started the
-`db` container we did not use either the `-P` or `-p` flags. That's a big benefit of
-linking: we don't need to expose the source container, here the PostgreSQL database, to
-the network.
-
-Docker exposes connectivity information for the source container to the
-recipient container in two ways:
-
-* Environment variables,
-* Updating the `/etc/hosts` file.
-
-### Environment Variables
-
-Docker creates several environment variables when you link containers. Docker
-automatically creates environment variables in the target container based on
-the `--link` parameters. It also exposes all environment variables
-originating from Docker in the source container.
These include variables from:
-
-* the `ENV` commands in the source container's Dockerfile
-* the `-e`, `--env` and `--env-file` options on the `docker run`
-command when the source container is started
-
-These environment variables enable programmatic discovery from within the
-target container of information related to the source container.
-
-> **Warning**:
-> It is important to understand that *all* environment variables originating
-> from Docker within a container are made available to *any* container
-> that links to it. This could have serious security implications if sensitive
-> data is stored in them.
-
-Docker sets an `<alias>_NAME` environment variable for each target container
-listed in the `--link` parameter. For example, if a new container called
-`web` is linked to a database container called `db` via `--link db:webdb`,
-then Docker creates a `WEBDB_NAME=/web/webdb` variable in the `web` container.
-
-Docker also defines a set of environment variables for each port exposed by the
-source container. Each variable has a unique prefix in the form:
-
-`<alias>_PORT_<port>_<protocol>`
-
-The components in this prefix are:
-
-* the alias `<alias>` specified in the `--link` parameter (for example, `webdb`)
-* the `<port>` number exposed
-* a `<protocol>` which is either TCP or UDP
-
-Docker uses this prefix format to define three distinct environment variables:
-
-* The `prefix_ADDR` variable contains the IP Address from the URL, for
-example `WEBDB_PORT_8080_TCP_ADDR=172.17.0.82`.
-* The `prefix_PORT` variable contains just the port number from the URL, for
-example `WEBDB_PORT_8080_TCP_PORT=8080`.
-* The `prefix_PROTO` variable contains just the protocol from the URL, for
-example `WEBDB_PORT_8080_TCP_PROTO=tcp`.
-
-If the container exposes multiple ports, an environment variable set is
-defined for each one. This means, for example, if a container exposes 4 ports,
-Docker creates 12 environment variables, 3 for each port.
-
-Additionally, Docker creates an environment variable called `<alias>_PORT`.
This variable contains the URL of the source container's first exposed port.
The 'first' port is defined as the exposed port with the lowest number.
For example, consider the `WEBDB_PORT=tcp://172.17.0.82:8080` variable. If
that port is used for both tcp and udp, then the tcp one is specified.

Finally, Docker also exposes each Docker-originated environment variable
from the source container as an environment variable in the target. For each
such variable Docker creates an `<alias>_ENV_<name>` variable in the target
container. The variable's value is set to the value Docker used when it
started the source container.

Returning to our database example, you can run the `env`
command to list the specified container's environment variables.

```
    $ sudo docker run --rm --name web2 --link db:db training/webapp env
    . . .
    DB_NAME=/web2/db
    DB_PORT=tcp://172.17.0.5:5432
    DB_PORT_5432_TCP=tcp://172.17.0.5:5432
    DB_PORT_5432_TCP_PROTO=tcp
    DB_PORT_5432_TCP_PORT=5432
    DB_PORT_5432_TCP_ADDR=172.17.0.5
    . . .
```

You can see that Docker has created a series of environment variables with
useful information about the source `db` container. Each variable is prefixed
with `DB_`, which is populated from the `alias` you specified above. If the
`alias` were `db1`, the variables would be prefixed with `DB1_`. You can use
these environment variables to configure your applications to connect to the
database on the `db` container. The connection will be secure and private;
only the linked `web` container will be able to talk to the `db` container.

### Important notes on Docker environment variables

Unlike host entries in the [`/etc/hosts` file](#updating-the-etchosts-file),
IP addresses stored in the environment variables are not automatically updated
if the source container is restarted. We recommend using the host entries in
`/etc/hosts` to resolve the IP address of linked containers.
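If you do read the connection details from the environment, for example in a
startup script, the `tcp://address:port` URL format shown above splits cleanly
with plain shell parameter expansion. A minimal sketch, with the `DB_PORT`
value from the example hard-coded for illustration:

```shell
# Split a link URL like tcp://172.17.0.5:5432 into protocol, address and port
DB_PORT="tcp://172.17.0.5:5432"

proto="${DB_PORT%%://*}"    # strip everything from "://" on  -> tcp
hostport="${DB_PORT#*://}"  # strip the leading "tcp://"      -> 172.17.0.5:5432
addr="${hostport%%:*}"      # strip the trailing ":port"      -> 172.17.0.5
port="${hostport##*:}"      # strip the leading "address:"    -> 5432

echo "$proto $addr $port"
```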
These environment variables are only set for the first process in the
container. Some daemons, such as `sshd`, will scrub them when spawning shells
for connection.

### Updating the `/etc/hosts` file

In addition to the environment variables, Docker adds a host entry for the
source container to the `/etc/hosts` file. Here's an entry for the `web`
container:

    $ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
    root@aed84ee21bde:/opt/webapp# cat /etc/hosts
    172.17.0.7  aed84ee21bde
    . . .
    172.17.0.5  db

You can see two relevant host entries. The first is an entry for the `web`
container that uses the Container ID as a host name. The second entry uses the
link alias to reference the IP address of the `db` container. You can now ping
that host via this host name.

    root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping
    root@aed84ee21bde:/opt/webapp# ping db
    PING db (172.17.0.5): 48 data bytes
    56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms
    56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms
    56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms

> **Note:**
> In the example, you'll note you had to install `ping` because it was not
> included in the container initially.

Here, you used the `ping` command to ping the `db` container using its host
entry, which resolves to `172.17.0.5`. You can use this host entry to configure
an application to make use of your `db` container.

> **Note:**
> You can link multiple recipient containers to a single source. For
> example, you could have multiple (differently named) web containers attached
> to your `db` container.

If you restart the source container, the linked containers' `/etc/hosts` files
will be automatically updated with the source container's new IP address,
allowing linked communication to continue.
- - $ sudo docker restart db - db - $ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash - root@aed84ee21bde:/opt/webapp# cat /etc/hosts - 172.17.0.7 aed84ee21bde - . . . - 172.17.0.9 db - -# Next step - -Now that you know how to link Docker containers together, the next step is -learning how to manage data, volumes and mounts inside your containers. - -Go to [Managing Data in Containers](/userguide/dockervolumes). - diff --git a/userguide/dockerrepos.md~ b/userguide/dockerrepos.md~ deleted file mode 100644 index d8dc44e69e..0000000000 --- a/userguide/dockerrepos.md~ +++ /dev/null @@ -1,171 +0,0 @@ -page_title: Working with Docker Hub -page_description: Learn how to use the Docker Hub to manage Docker images and work flow -page_keywords: repo, Docker Hub, Docker Hub, registry, index, repositories, usage, pull image, push image, image, documentation - -# Working with Docker Hub - -So far you've learned how to use the command line to run Docker on your local host. -You've learned how to [pull down images](/userguide/usingdocker/) to build containers -from existing images and you've learned how to [create your own images](/userguide/dockerimages). - -Next, you're going to learn how to use the [Docker Hub](https://hub.docker.com) to -simplify and enhance your Docker workflows. - -The [Docker Hub](https://hub.docker.com) is a public registry maintained by Docker, -Inc. It contains over 15,000 images you can download and use to build containers. It also -provides authentication, work group structure, workflow tools like webhooks and build -triggers, and privacy tools like private repositories for storing images you don't want -to share publicly. - -## Docker commands and Docker Hub - -Docker itself provides access to Docker Hub services via the `docker search`, -`pull`, `login`, and `push` commands. This page will show you how these commands work. 
### Account creation and login

Typically, you'll want to start by creating an account on Docker Hub (if you
haven't already) and logging in. You can create your account directly on
[Docker Hub](https://hub.docker.com/account/signup/), or by running:

    $ sudo docker login

This will prompt you for a user name, which will become the public namespace
for your public repositories. If your user name is available, Docker will
prompt you to enter a password and your e-mail address. It will then
automatically log you in. You can now commit and push your own images up to
your repos on Docker Hub.

> **Note:**
> Your authentication credentials will be stored in the `.dockercfg`
> authentication file in your home directory.

## Searching for images

You can search the [Docker Hub](https://hub.docker.com) registry via its search
interface or by using the command line interface. Searching can find images by
image name, user name, or description:

    $ sudo docker search centos
    NAME           DESCRIPTION                                     STARS   OFFICIAL   TRUSTED
    centos         Official CentOS 6 Image as of 12 April 2014    88
    tianon/centos  CentOS 5 and 6, created using rinse instea...  21
    ...

There you can see two example results: `centos` and `tianon/centos`. The second
result shows that it comes from the public repository of a user named `tianon`,
while the first result, `centos`, doesn't explicitly list a repository, which
means that it comes from the trusted top-level namespace. The `/` character
separates a user name from the repository name.

Once you've found the image you want, you can download it with
`docker pull <imagename>`:

    $ sudo docker pull centos
    Pulling repository centos
    0b443ba03958: Download complete
    539c0211cd76: Download complete
    511136ea3c5a: Download complete
    7064731afe90: Download complete

    Status: Downloaded newer image for centos

You now have an image from which you can run containers.
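The namespace rule above (a `/` marks a user repository, a bare name a
top-level one) is easy to make concrete. A tiny hypothetical shell helper,
included here purely as an illustration of the naming convention:

```shell
# Classify a repository name by its namespace:
# a name containing "/" belongs to a user; a bare name is top-level.
classify() {
  case "$1" in
    */*) echo "user repository (user: ${1%%/*})" ;;
    *)   echo "top-level repository" ;;
  esac
}

classify centos          # -> top-level repository
classify tianon/centos   # -> user repository (user: tianon)
```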
## Contributing to Docker Hub

Anyone can pull public images from the [Docker Hub](https://hub.docker.com)
registry, but if you would like to share your own images, then you must
register first, as we saw in the [first section of the Docker User
Guide](/userguide/dockerhub/).

## Pushing a repository to Docker Hub

In order to push a repository to its registry, you need to have named an image
or committed your container to a named image, as we saw
[here](/userguide/dockerimages).

Now you can push this repository to the registry designated by its name or tag.

    $ sudo docker push yourname/newimage

The image will then be uploaded and available for use by your team-mates and/or
the community.

## Features of Docker Hub

Let's take a closer look at some of the features of Docker Hub. You can find
more information [here](http://docs.docker.com/docker-hub/).

* Private repositories
* Organizations and teams
* Automated Builds
* Webhooks

### Private Repositories

Sometimes you have images you don't want to make public and share with
everyone. So Docker Hub allows you to have private repositories. You can
sign up for a plan [here](https://registry.hub.docker.com/plans/).

### Organizations and teams

One of the useful aspects of private repositories is that you can share
them only with members of your organization or team. Docker Hub lets you
create organizations where you can collaborate with your colleagues and
manage private repositories. You can learn how to create and manage an
organization [here](https://registry.hub.docker.com/account/organizations/).

### Automated Builds

Automated Builds automate the building and updating of images from
[GitHub](https://www.github.com) or [BitBucket](http://bitbucket.com), directly
on Docker Hub. It works by adding a commit hook to your selected GitHub or
BitBucket repository, triggering a build and update when you push a commit.

#### To set up an Automated Build

1.
Create a [Docker Hub account](https://hub.docker.com/) and log in.
2. Link your GitHub or BitBucket account through the ["Link Accounts"](https://registry.hub.docker.com/account/accounts/) menu.
3. [Configure an Automated Build](https://registry.hub.docker.com/builds/add/).
4. Pick a GitHub or BitBucket project that has a `Dockerfile` that you want to build.
5. Pick the branch you want to build (the default is the `master` branch).
6. Give the Automated Build a name.
7. Assign an optional Docker tag to the Build.
8. Specify where the `Dockerfile` is located. The default is `/`.

Once the Automated Build is configured, it will automatically trigger a
build and, in a few minutes, you should see your new Automated Build on the
[Docker Hub](https://hub.docker.com) Registry. It will stay in sync with your
GitHub or BitBucket repository until you deactivate the Automated Build.

If you want to see the status of your Automated Builds, you can go to your
[Automated Builds page](https://registry.hub.docker.com/builds/) on the Docker
Hub, and it will show you the status of your builds and their build history.

Once you've created an Automated Build you can deactivate or delete it. You
cannot, however, push to an Automated Build with the `docker push` command.
You can only manage it by committing code to your GitHub or BitBucket
repository.

You can create multiple Automated Builds per repository and configure them
to point to specific `Dockerfile`s or Git branches.

#### Build Triggers

Automated Builds can also be triggered via a URL on Docker Hub. This
allows you to rebuild an Automated Build image on demand.

### Webhooks

Webhooks are attached to your repositories and allow you to trigger an
event when an image or updated image is pushed to the repository. With
a webhook you can specify a target URL and a JSON payload that will be
delivered when the image is pushed.
- -See the Docker Hub documentation for [more information on -webhooks](http://docs.docker.com/docker-hub/repos/#webhooks) - -## Next steps - -Go and use Docker! - diff --git a/userguide/dockervolumes.md~ b/userguide/dockervolumes.md~ deleted file mode 100644 index fcf7c55943..0000000000 --- a/userguide/dockervolumes.md~ +++ /dev/null @@ -1,200 +0,0 @@ -page_title: Managing Data in Containers -page_description: How to manage data inside your Docker containers. -page_keywords: Examples, Usage, volume, docker, documentation, user guide, data, volumes - -# Managing Data in Containers - -So far we've been introduced to some [basic Docker -concepts](/userguide/usingdocker/), seen how to work with [Docker -images](/userguide/dockerimages/) as well as learned about [networking -and links between containers](/userguide/dockerlinks/). In this section -we're going to discuss how you can manage data inside and between your -Docker containers. - -We're going to look at the two primary ways you can manage data in -Docker. - -* Data volumes, and -* Data volume containers. - -## Data volumes - -A *data volume* is a specially-designated directory within one or more -containers that bypasses the [*Union File -System*](/terms/layer/#union-file-system). Data volumes provide several -useful features for persistent or shared data: - -- Volumes are initialized when a container is created. If the container's - base image contains data at the specified mount point, that data is - copied into the new volume. -- Data volumes can be shared and reused among containers. -- Changes to a data volume are made directly. -- Changes to a data volume will not be included when you update an image. -- Data volumes persist even if the container itself is deleted. - -Data volumes are designed to persist data, independent of the container's life -cycle. 
Docker therefore *never* automatically deletes volumes when you remove
a container, nor will it "garbage collect" volumes that are no longer
referenced by a container.

### Adding a data volume

You can add a data volume to a container using the `-v` flag with the
`docker create` and `docker run` commands. You can use the `-v` flag multiple
times to mount multiple data volumes. Let's mount a single volume now in our
web application container.

    $ sudo docker run -d -P --name web -v /webapp training/webapp python app.py

This will create a new volume inside a container at `/webapp`.

> **Note:**
> You can also use the `VOLUME` instruction in a `Dockerfile` to add one or
> more new volumes to any container created from that image.

### Mount a Host Directory as a Data Volume

In addition to creating a volume using the `-v` flag, you can also mount a
directory from your Docker daemon's host into a container.

> **Note:**
> If you are using Boot2Docker, your Docker daemon only has limited access to
> your OSX/Windows filesystem. Boot2Docker tries to auto-share your `/Users`
> (OSX) or `C:\Users` (Windows) directory, so you can mount files or directories
> using `docker run -v /Users/<path>:/<containerPath> ...` (OSX) or
> `docker run -v /c/Users/<path>:/<containerPath> ...` (Windows). All other
> paths come from the Boot2Docker virtual machine's filesystem.

    $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py

This will mount the host directory, `/src/webapp`, into the container at
`/opt/webapp`.

> **Note:**
> If the path `/opt/webapp` already exists inside the container's image, its
> contents will be replaced by the contents of `/src/webapp` on the host, to
> stay consistent with the expected behavior of `mount`.

This is very useful for testing; for example, we can mount our source code
inside the container and see our application at work as we change the source
code.
The directory on the host must be specified as an absolute path, and if the
directory doesn't exist, Docker will automatically create it for you.

> **Note:**
> This is not available from a `Dockerfile` due to the portability
> and sharing purpose of built images. The host directory is, by its nature,
> host-dependent, so a host directory specified in a `Dockerfile` probably
> wouldn't work on all hosts.

Docker defaults to a read-write volume, but we can also mount a directory
read-only.

    $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py

Here we've mounted the same `/src/webapp` directory, but we've added the `ro`
option to specify that the mount should be read-only.

### Mount a Host File as a Data Volume

The `-v` flag can also be used to mount a single file - instead of *just*
directories - from the host machine.

    $ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash

This will drop you into a bash shell in a new container; you will have your
bash history from the host, and when you exit the container, the host will
have the history of the commands typed while in the container.

> **Note:**
> Many tools used to edit files, including `vi` and `sed --in-place`, may
> result in an inode change. Since Docker v1.1.0, this will produce an error
> such as "*sed: cannot rename ./sedKdJ9Dy: Device or resource busy*". In the
> case where you want to edit the mounted file, it is often easiest to instead
> mount the parent directory.

## Creating and mounting a Data Volume Container

If you have some persistent data that you want to share between
containers, or want to use from non-persistent containers, it's best to
create a named Data Volume Container, and then to mount the data from
it.

Let's create a new named container with a volume to share.
While this container doesn't run an application, it reuses the
`training/postgres` image so that all containers share common layers,
saving disk space.

    $ sudo docker create -v /dbdata --name dbdata training/postgres

You can then use the `--volumes-from` flag to mount the `/dbdata` volume in
another container.

    $ sudo docker run -d --volumes-from dbdata --name db1 training/postgres

And another:

    $ sudo docker run -d --volumes-from dbdata --name db2 training/postgres

In this case, if the `postgres` image contained a directory called `/dbdata`,
then mounting the volumes from the `dbdata` container hides the
`/dbdata` files from the `postgres` image. The result is that only the files
from the `dbdata` container are visible.

You can use multiple `--volumes-from` parameters to bring together multiple
data volumes from multiple containers.

You can also extend the chain by mounting the volume that came from the
`dbdata` container in yet another container via the `db1` or `db2` containers.

    $ sudo docker run -d --name db3 --volumes-from db1 training/postgres

If you remove containers that mount volumes, including the initial `dbdata`
container, or the subsequent containers `db1` and `db2`, the volumes will not
be deleted. To delete the volume from disk, you must explicitly call
`docker rm -v` against the last container with a reference to the volume. This
lets you upgrade, or effectively migrate, data volumes between containers.

> **Note:** Docker will not warn you when removing a container *without*
> providing the `-v` option to delete its volumes. If you remove containers
> without using the `-v` option, you may end up with "dangling" volumes;
> volumes that are no longer referenced by a container.
> Dangling volumes are difficult to get rid of and can take up a large amount
> of disk space.
We're working on improving volume management, and you can check
> progress on this in [pull request #8484](https://github.com/docker/docker/pull/8484).

## Backup, restore, or migrate data volumes

Another useful function we can perform with volumes is to use them for
backups, restores, or migrations. We do this by using the
`--volumes-from` flag to create a new container that mounts that volume,
like so:

    $ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

Here we've launched a new container and mounted the volume from the
`dbdata` container. We've then mounted a local host directory as
`/backup`. Finally, we've passed a command that uses `tar` to back up the
contents of the `dbdata` volume to a `backup.tar` file inside our
`/backup` directory. When the command completes and the container stops,
we'll be left with a backup of our `dbdata` volume.

You could then restore it to the same container, or to another one that you've
made elsewhere. Create a new container.

    $ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash

Then un-tar the backup file in the new container's data volume.

    $ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar

You can use the techniques above to automate backup, migration, and
restore testing using your preferred tools.

# Next steps

Now that we've learned a bit more about how to use Docker, we're going to see
how to combine Docker with the services available on
[Docker Hub](https://hub.docker.com), including Automated Builds and private
repositories.

Go to [Working with Docker Hub](/userguide/dockerrepos).
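The `tar` mechanics behind the backup and restore steps above can be rehearsed
locally without Docker. A sketch using plain directories, where `dbdata`
stands in for the volume and `backup` for the mounted host directory (all
paths here are illustrative):

```shell
# Local rehearsal of the backup/restore flow, no containers involved.
mkdir -p dbdata backup restore
echo "some data" > dbdata/file.txt

tar cf backup/backup.tar dbdata              # the backup step
(cd restore && tar xf ../backup/backup.tar)  # the restore step

cat restore/dbdata/file.txt
```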
- diff --git a/userguide/index.md~ b/userguide/index.md~ deleted file mode 100644 index d0dbdb84ee..0000000000 --- a/userguide/index.md~ +++ /dev/null @@ -1,123 +0,0 @@ -page_title: The Docker User Guide -page_description: The Docker User Guide home page -page_keywords: docker, introduction, documentation, about, technology, docker.io, user, guide, user's, manual, platform, framework, virtualization, home, intro - -# Welcome to the Docker User Guide - -In the [Introduction](/) you got a taste of what Docker is and how it -works. In this guide we're going to take you through the fundamentals of -using Docker and integrating it into your environment. - -We’ll teach you how to use Docker to: - -* Dockerize your applications. -* Run your own containers. -* Build Docker images. -* Share your Docker images with others. -* And a whole lot more! - -We've broken this guide into major sections that take you through -the Docker life cycle: - -## Getting Started with Docker Hub - -*How do I use Docker Hub?* - -Docker Hub is the central hub for Docker. It hosts public Docker images -and provides services to help you build and manage your Docker -environment. To learn more: - -Go to [Using Docker Hub](/userguide/dockerhub). - -## Dockerizing Applications: A "Hello world" - -*How do I run applications inside containers?* - -Docker offers a *container-based* virtualization platform to power your -applications. To learn how to Dockerize applications and run them: - -Go to [Dockerizing Applications](/userguide/dockerizing). - -## Working with Containers - -*How do I manage my containers?* - -Once you get a grip on running your applications in Docker containers -we're going to show you how to manage those containers. To find out -about how to inspect, monitor and manage containers: - -Go to [Working With Containers](/userguide/usingdocker). 
## Working with Docker Images

*How can I access, share and build my own images?*

Once you've learned how to use Docker, it's time to take the next step and
learn how to build your own application images with Docker.

Go to [Working with Docker Images](/userguide/dockerimages).

## Linking Containers Together

Until now we've seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
by linking together multiple Docker containers.

Go to [Linking Containers Together](/userguide/dockerlinks).

## Managing Data in Containers

Now that we know how to link Docker containers together, the next step is
learning how to manage data, volumes and mounts inside our containers.

Go to [Managing Data in Containers](/userguide/dockervolumes).

## Working with Docker Hub

Now that we've learned a bit more about how to use Docker, we're going to see
how to combine Docker with the services available on Docker Hub, including
Automated Builds and private repositories.

Go to [Working with Docker Hub](/userguide/dockerrepos).

## Docker Compose

Docker Compose allows you to define an application's components -- their
containers, configuration, links and volumes -- in a single file. Then a
single command will set everything up and start your application running.

Go to [Docker Compose user guide](/compose/).

## Docker Machine

Docker Machine helps you get Docker Engines up and running quickly. Machine
can set up hosts for Docker Engines on your computer, on cloud providers,
and/or in your data center, and then configure your Docker client to securely
talk to them.

Go to [Docker Machine user guide](/machine/).

## Docker Swarm

Docker Swarm pools several Docker Engines together and exposes them as a single
virtual Docker Engine. It serves the standard Docker API, so any tool that
already works with Docker can now transparently scale up to multiple hosts.
- -Go to [Docker Swarm user guide](/swarm/). - -## Getting help - -* [Docker homepage](http://www.docker.com/) -* [Docker Hub](https://hub.docker.com) -* [Docker blog](http://blog.docker.com/) -* [Docker documentation](http://docs.docker.com/) -* [Docker Getting Started Guide](http://www.docker.com/gettingstarted/) -* [Docker code on GitHub](https://github.com/docker/docker) -* [Docker mailing - list](https://groups.google.com/forum/#!forum/docker-user) -* Docker on IRC: irc.freenode.net and channel #docker -* [Docker on Twitter](http://twitter.com/docker) -* Get [Docker help](http://stackoverflow.com/search?q=docker) on - StackOverflow -* [Docker.com](http://www.docker.com/) - diff --git a/userguide/level1.md~ b/userguide/level1.md~ deleted file mode 100644 index cca77dc362..0000000000 --- a/userguide/level1.md~ +++ /dev/null @@ -1,72 +0,0 @@ -page_title: Docker Images Test -page_description: How to work with Docker images. -page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker Hub, collaboration - -Back - -# Dockerfile Tutorial - -## Test your Dockerfile knowledge - Level 1 - -### Questions - -
- What is the Dockerfile instruction to specify the base image?
- What is the Dockerfile instruction to execute any commands on the current image and commit the results?
- What is the Dockerfile instruction to specify the maintainer of the Dockerfile?
- What is the character used to add comments in Dockerfiles?
### Fill the Dockerfile

Your best friend Eric Bardin sent you a Dockerfile, but some parts were lost in the ocean. Can you find the missing parts?
-
-# This is a Dockerfile to create an image with Memcached and Emacs installed. 
-# VERSION 1.0
-# use the ubuntu base image provided by dotCloud - ub
- E B, eric.bardin@dotcloud.com
-# make sure the package repository is up to date - echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list - apt-get update
-# install memcached -RUN apt-get install -y
-# install emacs - apt-get install -y emacs23 -
## What's next?

In the next level, we will go into more detail about how to specify which command should be executed when the container starts,
which user to use, and how to expose a particular port.

Back
Go to the next level

diff --git a/userguide/level2.md~ b/userguide/level2.md~ deleted file mode 100644 index fe6654e71f..0000000000 --- a/userguide/level2.md~ +++ /dev/null @@ -1,96 +0,0 @@

page_title: Docker Images Test
page_description: How to work with Docker images.
page_keywords: documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker Hub, collaboration

Back

# Dockerfile Tutorial

## Test your Dockerfile knowledge - Level 2

### Questions:
- What is the Dockerfile instruction to specify the base image?
- Which Dockerfile instruction sets the default command for your image?
- What is the character used to add comments in Dockerfiles?
- Which Dockerfile instruction sets the username to use when running the image?
- What is the Dockerfile instruction to execute any command on the current image and commit the results?
- Which Dockerfile instruction sets ports to be exposed when running the image?
- What is the Dockerfile instruction to specify the maintainer of the Dockerfile?
- Which Dockerfile instruction lets you trigger a command as soon as the container starts?
### Fill the Dockerfile

Your best friend Roberto Hashioka sent you a Dockerfile, but some parts were lost in the ocean. Can you find the missing parts?
-
-# Redis
-#
-# VERSION       0.42
-#
-# use the ubuntu base image provided by dotCloud
-  ub
-MAINT Ro Ha roberto.hashioka@dotcloud.com
-# make sure the package repository is up to date - echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list - apt-get update
-# install wget (required for redis installation) - apt-get install -y wget
-# install make (required for redis installation) - apt-get install -y make
-# install gcc (required for redis installation) -RUN apt-get install -y
-# install apache2 - wget http://download.redis.io/redis-stable.tar.gz -tar xvzf redis-stable.tar.gz -cd redis-stable && make && make install
-# launch redis when starting the image - ["redis-server"]
-# run as user daemon - daemon
-# expose port 6379 - 6379 -
- - -
## What's next?

Thanks for going through our tutorial! We will be posting Level 3 in the future.

To improve your Dockerfile writing skills even further, visit the Dockerfile best practices page.

Back to the Docs!

diff --git a/userguide/usingdocker.md~ b/userguide/usingdocker.md~ deleted file mode 100644 index 12a6b6fb2f..0000000000 --- a/userguide/usingdocker.md~ +++ /dev/null @@ -1,322 +0,0 @@

page_title: Working with Containers
page_description: Learn how to manage and operate Docker containers.
page_keywords: docker, the docker guide, documentation, docker.io, monitoring containers, docker top, docker inspect, docker port, ports, docker logs, log, Logs

# Working with Containers

In the [last section of the Docker User Guide](/userguide/dockerizing)
we launched our first containers. We launched two containers using the
`docker run` command:

* One container we ran interactively in the foreground.
* One container we ran daemonized in the background.

In the process we learned about several Docker commands:

* `docker ps` - Lists containers.
* `docker logs` - Shows us the standard output of a container.
* `docker stop` - Stops running containers.

> **Tip:**
> Another way to learn about `docker` commands is our
> [interactive tutorial](https://www.docker.com/tryit/).

The `docker` client is pretty simple. Each action you can take
with Docker is a command and each command can take a series of
flags and arguments.

    # Usage: [sudo] docker [command] [flags] [arguments] ..
    # Example:
    $ sudo docker run -i -t ubuntu /bin/bash

Let's see this in action by using the `docker version` command to return
version information on the currently installed Docker client and daemon.

    $ sudo docker version

This command will not only provide you with the version of the Docker client
and daemon you are using, but also the version of Go (the programming
language powering Docker).
    Client version: 0.8.0
    Go version (client): go1.2
    Git commit (client): cc3a8c8
    Server version: 0.8.0
    Git commit (server): cc3a8c8
    Go version (server): go1.2
    Last stable version: 0.8.0

### Seeing what the Docker client can do

We can see all of the commands available to us with the Docker client by
running the `docker` binary without any options.

    $ sudo docker

You will see a list of all currently available commands.

    Commands:
        attach    Attach to a running container
        build     Build an image from a Dockerfile
        commit    Create a new image from a container's changes
        . . .

### Seeing Docker command usage

You can also zoom in and review the usage for specific Docker commands.

Try typing `docker` followed by a `[command]` to see the usage for that
command:

    $ sudo docker attach
    Help output . . .

Or you can pass the `--help` flag to the `docker` binary.

    $ sudo docker attach --help

This will display the help text and all available flags:

    Usage: docker attach [OPTIONS] CONTAINER

    Attach to a running container

      --no-stdin=false: Do not attach stdin
      --sig-proxy=true: Proxify all received signal to the process (non-TTY mode only)

> **Note:**
> You can see a full list of Docker's commands
> [here](/reference/commandline/cli/).

## Running a Web Application in Docker

Now that we've learned a bit more about the `docker` client, let's move on to
the important stuff: running more containers. So far none of the containers
we've run has done anything particularly useful, so let's build on that
experience by running an example web application in Docker.

For our web application we're going to run a Python Flask application.
Let's start with a `docker run` command.

    $ sudo docker run -d -P training/webapp python app.py

Let's review what our command did. We've specified two flags: `-d` and
`-P`. We've already seen the `-d` flag, which tells Docker to run the
container in the background.
The `-P` flag is new and tells Docker to
-map any required network ports inside our container to our host. This
-lets us view our web application.
-
-We've specified an image: `training/webapp`. This is a pre-built image
-we've created that contains a simple Python Flask web application.
-
-Lastly, we've specified a command for our container to run: `python app.py`.
-This launches our web application.
-
-> **Note:**
-> You can see more detail on the `docker run` command in the [command
-> reference](/reference/commandline/cli/#run) and the [Docker Run
-> Reference](/reference/run/).
-
-## Viewing our Web Application Container
-
-Now let's see our running container using the `docker ps` command.
-
-    $ sudo docker ps -l
-    CONTAINER ID  IMAGE                   COMMAND        CREATED        STATUS        PORTS                    NAMES
-    bc533791f3f5  training/webapp:latest  python app.py  5 seconds ago  Up 2 seconds  0.0.0.0:49155->5000/tcp  nostalgic_morse
-
-You can see we've specified a new flag, `-l`, for the `docker ps`
-command. This tells the `docker ps` command to return the details of the
-*last* container started.
-
-> **Note:**
-> By default, the `docker ps` command only shows information about running
-> containers. If you want to see stopped containers too, use the `-a` flag.
-
-We can see the same details we saw [when we first Dockerized a
-container](/userguide/dockerizing) with one important addition in the `PORTS`
-column.
-
-    PORTS
-    0.0.0.0:49155->5000/tcp
-
-When we passed the `-P` flag to the `docker run` command, Docker mapped
-any ports exposed in our image to our host.
-
-> **Note:**
-> We'll learn more about how to expose ports in Docker images when
-> [we learn how to build images](/userguide/dockerimages).
-
-In this case Docker has mapped port 5000 (the default Python Flask
-port) inside the container to port 49155 on the host.
-
-Network port bindings are very configurable in Docker.
In our last
-example, the `-P` flag is a shortcut for `-p 5000`, which maps port 5000
-inside the container to a randomly assigned high port (from the range
-49153 to 65535) on the local Docker host. We can also bind Docker
-containers to specific ports using the `-p` flag, for example:
-
-    $ sudo docker run -d -p 5000:5000 training/webapp python app.py
-
-This would map port 5000 inside our container to port 5000 on our local
-host. You might now be asking: why wouldn't we just always use 1:1 port
-mappings in Docker containers rather than mapping to high ports? Well,
-a 1:1 mapping has the constraint that each port on your local host can
-only be bound once. Let's say you want to test two Python applications,
-both bound to port 5000 inside their own containers. Without Docker's
-port mapping you could only access one at a time on the Docker host.
-
-So let's now browse to port 49155 in a web browser to
-see the application.
-
-![Viewing the web application](/userguide/webapp1.png)
-
-Our Python application is live!
-
-> **Note:**
-> If you are using the boot2docker virtual machine on OS X, Windows or Linux,
-> you'll need to get the IP of the virtual host instead of using localhost.
-> You can do this by running the following in
-> the boot2docker shell.
->
->     $ boot2docker ip
->     The VM's Host only interface IP address is: 192.168.59.103
->
-> In this case you'd browse to http://192.168.59.103:49155 for the above example.
-
-## A Network Port Shortcut
-
-Using the `docker ps` command to return the mapped port is a bit clumsy, so
-Docker has a useful shortcut we can use: `docker port`. To use `docker port` we
-specify the ID or name of our container and then the container port for which
-we need the corresponding public-facing port.
-
-    $ sudo docker port nostalgic_morse 5000
-    0.0.0.0:49155
-
-In this case we've looked up what port is mapped externally to port 5000 inside
-the container.
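To make the anatomy of these mappings concrete, here is a small Python sketch (not part of the original guide; `parse_port_mapping` is a hypothetical helper name) that pulls apart a `PORTS` entry of the form `docker ps` printed above, such as `0.0.0.0:49155->5000/tcp`:

```python
def parse_port_mapping(ports_entry):
    """Split a `docker ps` PORTS entry like '0.0.0.0:49155->5000/tcp'
    into its host and container parts."""
    # Host side and container side are separated by '->'.
    host_part, _, container_part = ports_entry.partition("->")
    # rpartition splits on the *last* ':', so the host address may
    # itself contain colons.
    host_ip, _, host_port = host_part.rpartition(":")
    # The container side is '<port>/<protocol>'.
    container_port, _, proto = container_part.partition("/")
    return {
        "host_ip": host_ip,
        "host_port": int(host_port),
        "container_port": int(container_port),
        "protocol": proto,
    }

mapping = parse_port_mapping("0.0.0.0:49155->5000/tcp")
print(mapping["host_port"], mapping["container_port"])  # 49155 5000
```

This is only an illustration of what the mapping string encodes; in practice you would ask Docker directly with `docker port`, as shown above.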
-
-## Viewing the Web Application's Logs
-
-Let's also find out a bit more about what's happening with our application and
-use another of the commands we've learned, `docker logs`.
-
-    $ sudo docker logs -f nostalgic_morse
-    * Running on http://0.0.0.0:5000/
-    10.0.2.2 - - [23/May/2014 20:16:31] "GET / HTTP/1.1" 200 -
-    10.0.2.2 - - [23/May/2014 20:16:31] "GET /favicon.ico HTTP/1.1" 404 -
-
-This time though we've added a new flag, `-f`. This causes the `docker
-logs` command to act like the `tail -f` command and watch the
-container's standard output. We can see here the logs from Flask showing
-the application running on port 5000 and the access log entries for it.
-
-## Looking at our Web Application Container's Processes
-
-In addition to the container's logs we can also examine the processes
-running inside it using the `docker top` command.
-
-    $ sudo docker top nostalgic_morse
-    PID                 USER                COMMAND
-    854                 root                python app.py
-
-Here we can see our `python app.py` command is the only process running inside
-the container.
-
-## Inspecting our Web Application Container
-
-Lastly, we can take a low-level dive into our Docker container using the
-`docker inspect` command. It returns a JSON document of useful configuration
-and status information about Docker containers.
-
-    $ sudo docker inspect nostalgic_morse
-
-Let's see a sample of that JSON output.
-
-    [{
-        "ID": "bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20",
-        "Created": "2014-05-26T05:52:40.808952951Z",
-        "Path": "python",
-        "Args": [
-           "app.py"
-        ],
-        "Config": {
-            "Hostname": "bc533791f3f5",
-            "Domainname": "",
-            "User": "",
-    . . .
-
-We can also narrow down the information we want to return by requesting a
-specific element, for example to return the container's IP address we would
-run:
-
-    $ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse
-    172.17.0.5
-
-## Stopping our Web Application Container
-
-Okay, we've seen the web application working.
Now let's stop it using the
-`docker stop` command and the name of our container: `nostalgic_morse`.
-
-    $ sudo docker stop nostalgic_morse
-    nostalgic_morse
-
-We can now use the `docker ps` command to check if the container has
-been stopped.
-
-    $ sudo docker ps -l
-
-## Restarting our Web Application Container
-
-Oops! Just after you stopped the container, you get a call to say another
-developer needs the container back. From here you have two choices: you
-can create a new container or restart the old one. Let's look at
-starting our previous container back up.
-
-    $ sudo docker start nostalgic_morse
-    nostalgic_morse
-
-Now quickly run `docker ps -l` again to see that the running container is
-back up, or browse to the container's URL to see if the application
-responds.
-
-> **Note:**
-> Also available is the `docker restart` command that runs a stop and
-> then start on the container.
-
-## Removing our Web Application Container
-
-Your colleague has let you know that they've now finished with the container
-and won't need it again. So let's remove it using the `docker rm` command.
-
-    $ sudo docker rm nostalgic_morse
-    Error: Impossible to remove a running container, please stop it first or use -f
-    2014/05/24 08:12:56 Error: failed to remove one or more containers
-
-What's happened? We can't actually remove a running container. This protects
-you from accidentally removing a running container you might need. Let's try
-this again by stopping the container first.
-
-    $ sudo docker stop nostalgic_morse
-    nostalgic_morse
-    $ sudo docker rm nostalgic_morse
-    nostalgic_morse
-
-And now our container is stopped and deleted.
-
-> **Note:**
-> Always remember that deleting a container is final!
-
-## Next steps
-
-Until now we've only used images that we've downloaded from
-[Docker Hub](https://hub.docker.com). Now let's get introduced to
-building and sharing our own images.
-
-Go to [Working with Docker Images](/userguide/dockerimages).
-