mirror of
https://github.com/docker/docs.git
synced 2026-03-27 14:28:47 +07:00
6184 lines
2.7 MiB
{
"pages": [
{
"loc": "/",
"tags": "",
"text": "About Docker\nDevelop, Ship and Run Any Application, Anywhere\nDocker is a platform for developers and sysadmins\nto develop, ship, and run applications. Docker lets you quickly assemble\napplications from components and eliminates the friction that can come when\nshipping code. Docker lets you get your code tested and deployed into production\nas fast as possible.\nDocker consists of:\n\nThe Docker Engine - our lightweight and powerful open source container\n virtualization technology combined with a work flow for building\n and containerizing your applications.\nDocker Hub - our SaaS service for\n sharing and managing your application stacks.\n\nWhy Docker?\nFaster delivery of your applications\n\nWe want your environment to work better. Docker containers,\n and the work flow that comes with them, help your developers,\n sysadmins, QA folks, and release engineers work together to get your code\n into production and make it useful. We've created a standard\n container format that lets developers care about their applications\n inside containers while sysadmins and operators can work on running the\n container in your deployment. This separation of duties streamlines and\n simplifies the management and deployment of code.\nWe make it easy to build new containers, enable rapid iteration of\n your applications, and increase the visibility of changes. This\n helps everyone in your organization understand how an application works\n and how it is built.\nDocker containers are lightweight and fast! Containers have\n sub-second launch times, reducing the cycle\n time of development, testing, and deployment.\n\nDeploy and scale more easily\n\nDocker containers run (almost) everywhere. You can deploy\n containers on desktops, physical servers, virtual machines, into\n data centers, and up to public and private clouds.\nSince Docker runs on so many platforms, it's easy to move your\n applications around. 
You can easily move an application from a\n testing environment into the cloud and back whenever you need.\nDocker's lightweight containers also make scaling up and\n down fast and easy. You can quickly launch more containers when\n needed and then shut them down easily when they're no longer needed.\n\nGet higher density and run more workloads\n\nDocker containers don't need a hypervisor, so you can pack more of\n them onto your hosts. This means you get more value out of every\n server and can potentially reduce what you spend on equipment and\n licenses.\n\nFaster deployment makes for easier management\n\nAs Docker speeds up your work flow, it gets easier to make lots\n of small changes instead of huge, big bang updates. Smaller\n changes mean reduced risk and more uptime.\n\nAbout this guide\nThe Understanding Docker section will help you:\n\nSee how Docker works at a high level\nUnderstand the architecture of Docker\nDiscover Docker's features;\nSee how Docker compares to virtual machines\nSee some common use cases.\n\nInstallation Guides\nThe installation section will show you how to\ninstall Docker on a variety of platforms.\nDocker User Guide\nTo learn about Docker in more detail and to answer questions about usage and\nimplementation, check out the Docker User Guide.\nRelease Notes\nA summary of the changes in each release in the current series can now be found\non the separate Release Notes page\nLicensing\nDocker is licensed under the Apache License, Version 2.0. See\nLICENSE for the full\nlicense text.",
"title": "Docker"
},
{
"loc": "#about-docker",
"tags": "",
"text": "Develop, Ship and Run Any Application, Anywhere Docker is a platform for developers and sysadmins\nto develop, ship, and run applications. Docker lets you quickly assemble\napplications from components and eliminates the friction that can come when\nshipping code. Docker lets you get your code tested and deployed into production\nas fast as possible. Docker consists of: The Docker Engine - our lightweight and powerful open source container\n virtualization technology combined with a work flow for building\n and containerizing your applications. Docker Hub - our SaaS service for\n sharing and managing your application stacks.",
"title": "About Docker"
},
{
"loc": "#why-docker",
"tags": "",
"text": "Faster delivery of your applications We want your environment to work better. Docker containers,\n and the work flow that comes with them, help your developers,\n sysadmins, QA folks, and release engineers work together to get your code\n into production and make it useful. We've created a standard\n container format that lets developers care about their applications\n inside containers while sysadmins and operators can work on running the\n container in your deployment. This separation of duties streamlines and\n simplifies the management and deployment of code. We make it easy to build new containers, enable rapid iteration of\n your applications, and increase the visibility of changes. This\n helps everyone in your organization understand how an application works\n and how it is built. Docker containers are lightweight and fast! Containers have\n sub-second launch times, reducing the cycle\n time of development, testing, and deployment. Deploy and scale more easily Docker containers run (almost) everywhere. You can deploy\n containers on desktops, physical servers, virtual machines, into\n data centers, and up to public and private clouds. Since Docker runs on so many platforms, it's easy to move your\n applications around. You can easily move an application from a\n testing environment into the cloud and back whenever you need. Docker's lightweight containers also make scaling up and\n down fast and easy. You can quickly launch more containers when\n needed and then shut them down easily when they're no longer needed. Get higher density and run more workloads Docker containers don't need a hypervisor, so you can pack more of\n them onto your hosts. This means you get more value out of every\n server and can potentially reduce what you spend on equipment and\n licenses. Faster deployment makes for easier management As Docker speeds up your work flow, it gets easier to make lots\n of small changes instead of huge, big bang updates. 
Smaller\n changes mean reduced risk and more uptime.",
"title": "Why Docker?"
},
{
"loc": "#about-this-guide",
"tags": "",
"text": "The Understanding Docker section will help you: See how Docker works at a high level Understand the architecture of Docker Discover Docker's features; See how Docker compares to virtual machines See some common use cases. Installation Guides The installation section will show you how to\ninstall Docker on a variety of platforms. Docker User Guide To learn about Docker in more detail and to answer questions about usage and\nimplementation, check out the Docker User Guide .",
"title": "About this guide"
},
{
"loc": "#release-notes",
"tags": "",
"text": "A summary of the changes in each release in the current series can now be found\non the separate Release Notes page",
"title": "Release Notes"
},
{
"loc": "#licensing",
"tags": "",
"text": "Docker is licensed under the Apache License, Version 2.0. See LICENSE for the full\nlicense text.",
"title": "Licensing"
},
{
"loc": "/release-notes/",
"tags": "",
"text": "Release Notes\nYou can view release notes for earlier versions of Docker by selecting the\ndesired version from the drop-down list at the top right of this page.\nVersion 1.5.0\n(2015-02-03)\nFor a complete list of patches, fixes, and other improvements, see the\nmerge PR on GitHub.\nNew Features\n\nThe Docker daemon now has support for IPv6 networking between containers\n and on the docker0 bridge. For more information see the\n IPv6 networking reference.\nDocker container filesystems can now be set to --read-only, restricting your\n container to writing to volumes PR #10093.\nA new docker stats CONTAINERID command has been added to allow users to view a\n continuously updating stream of container resource usage statistics. See the\n stats command line reference and the\n container stats API reference.\n Note: this feature is only enabled for the libcontainer exec-driver at this point.\nUsers can now specify the file to use as the Dockerfile by running\n docker build -f alternate.dockerfile .. This will allow the definition of multiple\n Dockerfiles for a single project. See the docker build command reference for more information.\nThe v1 Open Image specification has been created to document the current Docker image\n format and metadata. Please see the Open Image specification document for more details.\nThis release also includes a number of significant performance improvements in\n build and image management (PR #9720,\n PR #8827).\nThe docker inspect command now lists ExecIDs generated for each docker exec process.\n See PR #9800 for more details.\nThe docker inspect command now shows the number of container restarts when there\n is a restart policy (PR #9621).\nThis version of Docker is built using Go 1.4.\n\n\nNote:\nDevelopment history prior to version 1.0 can be found by\nsearching in the Docker GitHub repo.\n\nKnown Issues\nThis section lists significant known issues present in Docker as of release\ndate. It is not exhaustive; it lists only issues with potentially significant\nimpact on users. This list will be updated as issues are resolved.\n\n\nUnexpected File Permissions in Containers\nAn idiosyncrasy in AUFS prevents permissions from propagating predictably\nbetween upper and lower layers. This can cause issues with accessing private\nkeys, database instances, etc. For complete information and workarounds see\nGitHub Issue 783.\n\n\nDocker Hub incompatible with Safari 8\nDocker Hub has multiple issues displaying on Safari 8, the default browser\nfor OS X 10.10 (Yosemite). Users should access the hub using a different\nbrowser. Most notably, changes in the way Safari handles cookies mean that the\nuser is repeatedly logged out. For more information, see the Docker\nforum post.",
"title": "Release Notes"
},
{
"loc": "/release-notes#release-notes",
"tags": "",
"text": "You can view release notes for earlier versions of Docker by selecting the\ndesired version from the drop-down list at the top right of this page.",
"title": "Release Notes"
},
{
"loc": "/release-notes#version-150",
"tags": "",
"text": "(2015-02-03) For a complete list of patches, fixes, and other improvements, see the merge PR on GitHub. New Features The Docker daemon now has support for IPv6 networking between containers\n and on the docker0 bridge. For more information see the\n IPv6 networking reference. Docker container filesystems can now be set to --read-only, restricting your\n container to writing to volumes PR #10093. A new docker stats CONTAINERID command has been added to allow users to view a\n continuously updating stream of container resource usage statistics. See the\n stats command line reference and the\n container stats API reference.\n Note: this feature is only enabled for the libcontainer exec-driver at this point. Users can now specify the file to use as the Dockerfile by running\n docker build -f alternate.dockerfile .. This will allow the definition of multiple\n Dockerfiles for a single project. See the docker build command reference for more information. The v1 Open Image specification has been created to document the current Docker image\n format and metadata. Please see the Open Image specification document for more details. This release also includes a number of significant performance improvements in\n build and image management (PR #9720,\n PR #8827). The docker inspect command now lists ExecIDs generated for each docker exec process.\n See PR #9800 for more details. The docker inspect command now shows the number of container restarts when there\n is a restart policy (PR #9621). This version of Docker is built using Go 1.4. Note:\nDevelopment history prior to version 1.0 can be found by\nsearching in the Docker GitHub repo.",
"title": "Version 1.5.0"
},
{
"loc": "/release-notes#known-issues",
"tags": "",
"text": "This section lists significant known issues present in Docker as of release\ndate. It is not exhaustive; it lists only issues with potentially significant\nimpact on users. This list will be updated as issues are resolved. Unexpected File Permissions in Containers\nAn idiosyncrasy in AUFS prevents permissions from propagating predictably\nbetween upper and lower layers. This can cause issues with accessing private\nkeys, database instances, etc. For complete information and workarounds see GitHub Issue 783. Docker Hub incompatible with Safari 8\nDocker Hub has multiple issues displaying on Safari 8, the default browser\nfor OS X 10.10 (Yosemite). Users should access the hub using a different\nbrowser. Most notably, changes in the way Safari handles cookies mean that the\nuser is repeatedly logged out. For more information, see the Docker\nforum post.",
"title": "Known Issues"
},
{
"loc": "/introduction/",
"tags": "",
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
"title": "**HIDDEN**"
},
{
"loc": "/introduction#table-of-contents",
"tags": "",
"text": "",
"title": "Table of Contents"
},
{
"loc": "/introduction#about",
"tags": "",
"text": "Docker Release Notes Understanding Docker",
"title": "About"
},
{
"loc": "/introduction#installation",
"tags": "",
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
"title": "Installation"
},
{
"loc": "/introduction#user-guide",
"tags": "",
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
"title": "User Guide"
},
{
"loc": "/introduction#docker-hub",
"tags": "",
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
"title": "Docker Hub"
},
{
"loc": "/introduction#examples",
"tags": "",
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
"title": "Examples"
},
{
"loc": "/introduction#articles",
"tags": "",
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
"title": "Articles"
},
{
"loc": "/introduction#reference",
"tags": "",
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
"title": "Reference"
},
{
"loc": "/introduction#contributor-guide",
"tags": "",
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
"title": "Contributor Guide"
},
{
"loc": "/introduction/understanding-docker/",
"tags": "",
"text": "Understanding Docker\nWhat is Docker?\nDocker is an open platform for developing, shipping, and running applications.\nDocker is designed to deliver your applications faster. With Docker you can\nseparate your applications from your infrastructure AND treat your\ninfrastructure like a managed application. Docker helps you ship code faster,\ntest faster, deploy faster, and shorten the cycle between writing code and\nrunning code.\nDocker does this by combining a lightweight container virtualization platform\nwith workflows and tooling that help you manage and deploy your applications.\nAt its core, Docker provides a way to run almost any application securely\nisolated in a container. The isolation and security allow you to run many\ncontainers simultaneously on your host. The lightweight nature of containers,\nwhich run without the extra load of a hypervisor, means you can get more out of\nyour hardware.\nSurrounding the container virtualization are tooling and a platform which can\nhelp you in several ways:\n\ngetting your applications (and supporting components) into Docker containers\ndistributing and shipping those containers to your teams for further development\nand testing\ndeploying those applications to your production environment,\n whether it be in a local data center or the Cloud.\n\nWhat can I use Docker for?\nFaster delivery of your applications\nDocker is perfect for helping you with the development lifecycle. Docker\nallows your developers to develop on local containers that contain your\napplications and services. It can then integrate into a continuous integration and\ndeployment workflow.\nFor example, your developers write code locally and share their development stack via\nDocker with their colleagues. When they are ready, they push their code and the\nstack they are developing onto a test environment and execute any required\ntests. 
From the testing environment, you can then push the Docker images into\nproduction and deploy your code.\nDeploying and scaling more easily\nDocker's container-based platform allows for highly portable workloads. Docker\ncontainers can run on a developer's local host, on physical or virtual machines\nin a data center, or in the Cloud.\nDocker's portability and lightweight nature also make dynamically managing\nworkloads easy. You can use Docker to quickly scale up or tear down applications\nand services. Docker's speed means that scaling can be near real time.\nAchieving higher density and running more workloads\nDocker is lightweight and fast. It provides a viable, cost-effective alternative\nto hypervisor-based virtual machines. This is especially useful in high density\nenvironments: for example, building your own Cloud or Platform-as-a-Service. But\nit is also useful for small and medium deployments where you want to get more\nout of the resources you have.\nWhat are the major Docker components?\nDocker has two major components:\n\nDocker: the open source container virtualization platform.\nDocker Hub: our Software-as-a-Service\n platform for sharing and managing Docker containers.\n\n\nNote: Docker is licensed under the open source Apache 2.0 license.\n\nWhat is Docker's architecture?\nDocker uses a client-server architecture. The Docker client talks to the\nDocker daemon, which does the heavy lifting of building, running, and\ndistributing your Docker containers. Both the Docker client and the daemon can\nrun on the same system, or you can connect a Docker client to a remote Docker\ndaemon. The Docker client and daemon communicate via sockets or through a\nRESTful API.\n\nThe Docker daemon\nAs shown in the diagram above, the Docker daemon runs on a host machine. 
The\nuser does not directly interact with the daemon, but instead through the Docker\nclient.\nThe Docker client\nThe Docker client, in the form of the docker binary, is the primary user\ninterface to Docker. It accepts commands from the user and communicates back and\nforth with a Docker daemon.\nInside Docker\nTo understand Docker's internals, you need to know about three components:\n\nDocker images. \nDocker registries. \nDocker containers.\n\nDocker images\nA Docker image is a read-only template. For example, an image could contain an Ubuntu\noperating system with Apache and your web application installed. Images are used to create\nDocker containers. Docker provides a simple way to build new images or update existing\nimages, or you can download Docker images that other people have already created.\nDocker images are the build component of Docker.\nDocker Registries\nDocker registries hold images. These are public or private stores from which you upload\nor download images. The public Docker registry is called\nDocker Hub. It provides a huge collection of existing\nimages for your use. These can be images you create yourself or you\ncan use images that others have previously created. Docker registries are the \ndistribution component of Docker.\nDocker containers\nDocker containers are similar to a directory. A Docker container holds everything that\nis needed for an application to run. Each container is created from a Docker\nimage. Docker containers can be run, started, stopped, moved, and deleted. Each\ncontainer is an isolated and secure application platform. 
Docker containers are the\n run component of Docker.\nSo how does Docker work?\nSo far, we've learned that:\n\nYou can build Docker images that hold your applications.\nYou can create Docker containers from those Docker images to run your\n applications.\nYou can share those Docker images via\n Docker Hub or your own registry.\n\nLet's look at how these elements combine together to make Docker work.\nHow does a Docker Image work?\nWe've already seen that Docker images are read-only templates from which Docker\ncontainers are launched. Each image consists of a series of layers. Docker\nmakes use of union file systems to\ncombine these layers into a single image. Union file systems allow files and\ndirectories of separate file systems, known as branches, to be transparently\noverlaid, forming a single coherent file system.\nOne of the reasons Docker is so lightweight is because of these layers. When you\nchange a Docker image\u2014for example, update an application to a new version\u2014 a new layer\ngets built. Thus, rather than replacing the whole image or entirely\nrebuilding, as you may do with a virtual machine, only that layer is added or\nupdated. Now you don't need to distribute a whole new image, just the update,\nmaking distributing Docker images faster and simpler.\nEvery image starts from a base image, for example ubuntu, a base Ubuntu image,\nor fedora, a base Fedora image. You can also use images of your own as the\nbasis for a new image, for example if you have a base Apache image you could use\nthis as the base of all your web application images.\n\nNote: Docker usually gets these base images from\nDocker Hub.\n\nDocker images are then built from these base images using a simple, descriptive\nset of steps we call instructions. Each instruction creates a new layer in our\nimage. Instructions include actions like:\n\nRun a command. \nAdd a file or directory. 
\nCreate an environment variable.\nWhat process to run when launching a container from this image.\n\nThese instructions are stored in a file called a Dockerfile. Docker reads this\nDockerfile when you request a build of an image, executes the instructions, and\nreturns a final image.\nHow does a Docker registry work?\nThe Docker registry is the store for your Docker images. Once you build a Docker\nimage you can push it to a public registry Docker Hub or to \nyour own registry running behind your firewall.\nUsing the Docker client, you can search for already published images and then\npull them down to your Docker host to build containers from them.\nDocker Hub provides both public and private storage\nfor images. Public storage is searchable and can be downloaded by anyone.\nPrivate storage is excluded from search results and only you and your users can\npull images down and use them to build containers. You can sign up for a storage plan\nhere.\nHow does a container work?\nA container consists of an operating system, user-added files, and meta-data. As\nwe've seen, each container is built from an image. That image tells Docker\nwhat the container holds, what process to run when the container is launched, and\na variety of other configuration data. The Docker image is read-only. When\nDocker runs a container from an image, it adds a read-write layer on top of the\nimage (using a union file system as we saw earlier) in which your application can\nthen run.\nWhat happens when you run a container?\nEither by using the docker binary or via the API, the Docker client tells the Docker\ndaemon to run a container.\n$ sudo docker run -i -t ubuntu /bin/bash\n\nLet's break down this command. The Docker client is launched using the docker\nbinary with the run option telling it to launch a new container. 
The bare\nminimum the Docker client needs to tell the Docker daemon to run the container\nis:\n\nWhat Docker image to build the container from, here ubuntu, a base Ubuntu\nimage; \nThe command you want to run inside the container when it is launched,\nhere /bin/bash, to start the Bash shell inside the new container.\n\nSo what happens under the hood when we run this command?\nIn order, Docker does the following:\n\nPulls the ubuntu image: Docker checks for the presence of the ubuntu\nimage and, if it doesn't exist locally on the host, then Docker downloads it from\nDocker Hub. If the image already exists, then Docker\nuses it for the new container. \nCreates a new container: Once Docker has the image, it uses it to create a\ncontainer. \nAllocates a filesystem and mounts a read-write layer: The container is created in \nthe file system and a read-write layer is added to the image.\nAllocates a network / bridge interface: Creates a network interface that allows the \nDocker container to talk to the local host. \nSets up an IP address: Finds and attaches an available IP address from a pool. \nExecutes a process that you specify: Runs your application, and; \nCaptures and provides application output: Connects and logs standard input, outputs \nand errors for you to see how your application is running.\n\nYou now have a running container! From here you can manage your container, interact with\nyour application and then, when finished, stop and remove your container.\nThe underlying technology\nDocker is written in Go and makes use of several Linux kernel features to\ndeliver the functionality we've seen.\nNamespaces\nDocker takes advantage of a technology called namespaces to provide the\nisolated workspace we call the container. 
When you run a container, Docker\ncreates a set of namespaces for that container.\nThis provides a layer of isolation: each aspect of a container runs in its own\nnamespace and does not have access outside it.\nSome of the namespaces that Docker uses are:\n\nThe pid namespace: Used for process isolation (PID: Process ID). \nThe net namespace: Used for managing network interfaces (NET:\n Networking). \nThe ipc namespace: Used for managing access to IPC\n resources (IPC: InterProcess Communication). \nThe mnt namespace: Used for managing mount-points (MNT: Mount). \nThe uts namespace: Used for isolating kernel and version identifiers. (UTS: Unix\nTimesharing System).\n\nControl groups\nDocker also makes use of another technology called cgroups or control groups.\nA key to running applications in isolation is to have them only use the\nresources you want. This ensures containers are good multi-tenant citizens on a\nhost. Control groups allow Docker to share available hardware resources to\ncontainers and, if required, set up limits and constraints. For example,\nlimiting the memory available to a specific container.\nUnion file systems\nUnion file systems, or UnionFS, are file systems that operate by creating layers,\nmaking them very lightweight and fast. Docker uses union file systems to provide\nthe building blocks for containers. Docker can make use of several union file system variants\nincluding: AUFS, btrfs, vfs, and DeviceMapper.\nContainer format\nDocker combines these components into a wrapper we call a container format. The\ndefault container format is called libcontainer. Docker also supports\ntraditional Linux containers using LXC. In the \nfuture, Docker may support other container formats, for example, by integrating with\nBSD Jails or Solaris Zones.\nNext steps\nInstalling Docker\nVisit the installation section.\nThe Docker User Guide\nLearn Docker in depth.",
"title": "Understanding Docker"
},
{
"loc": "/introduction/understanding-docker#understanding-docker",
"tags": "",
"text": "What is Docker? Docker is an open platform for developing, shipping, and running applications.\nDocker is designed to deliver your applications faster. With Docker you can\nseparate your applications from your infrastructure AND treat your\ninfrastructure like a managed application. Docker helps you ship code faster,\ntest faster, deploy faster, and shorten the cycle between writing code and\nrunning code. Docker does this by combining a lightweight container virtualization platform\nwith workflows and tooling that help you manage and deploy your applications. At its core, Docker provides a way to run almost any application securely\nisolated in a container. The isolation and security allow you to run many\ncontainers simultaneously on your host. The lightweight nature of containers,\nwhich run without the extra load of a hypervisor, means you can get more out of\nyour hardware. Surrounding the container virtualization are tooling and a platform which can\nhelp you in several ways: getting your applications (and supporting components) into Docker containers distributing and shipping those containers to your teams for further development\nand testing deploying those applications to your production environment,\n whether it be in a local data center or the Cloud.",
"title": "Understanding Docker"
},
{
"loc": "/introduction/understanding-docker#what-can-i-use-docker-for",
"tags": "",
"text": "Faster delivery of your applications Docker is perfect for helping you with the development lifecycle. Docker\nallows your developers to develop on local containers that contain your\napplications and services. It can then integrate into a continuous integration and\ndeployment workflow. For example, your developers write code locally and share their development stack via\nDocker with their colleagues. When they are ready, they push their code and the\nstack they are developing onto a test environment and execute any required\ntests. From the testing environment, you can then push the Docker images into\nproduction and deploy your code. Deploying and scaling more easily Docker's container-based platform allows for highly portable workloads. Docker\ncontainers can run on a developer's local host, on physical or virtual machines\nin a data center, or in the Cloud. Docker's portability and lightweight nature also make dynamically managing\nworkloads easy. You can use Docker to quickly scale up or tear down applications\nand services. Docker's speed means that scaling can be near real time. Achieving higher density and running more workloads Docker is lightweight and fast. It provides a viable, cost-effective alternative\nto hypervisor-based virtual machines. This is especially useful in high density\nenvironments: for example, building your own Cloud or Platform-as-a-Service. But\nit is also useful for small and medium deployments where you want to get more\nout of the resources you have.",
"title": "What can I use Docker for?"
},
{
"loc": "/introduction/understanding-docker#what-are-the-major-docker-components",
"tags": "",
"text": "Docker has two major components: Docker: the open source container virtualization platform. Docker Hub : our Software-as-a-Service\n platform for sharing and managing Docker containers. Note: Docker is licensed under the open source Apache 2.0 license.",
"title": "What are the major Docker components?"
},
{
"loc": "/introduction/understanding-docker#what-is-dockers-architecture",
"tags": "",
"text": "Docker uses a client-server architecture. The Docker client talks to the\nDocker daemon , which does the heavy lifting of building, running, and\ndistributing your Docker containers. Both the Docker client and the daemon can \nrun on the same system, or you can connect a Docker client to a remote Docker\ndaemon. The Docker client and daemon communicate via sockets or through a\nRESTful API. The Docker daemon As shown in the diagram above, the Docker daemon runs on a host machine. The\nuser does not directly interact with the daemon, but instead through the Docker\nclient. The Docker client The Docker client, in the form of the docker binary, is the primary user\ninterface to Docker. It accepts commands from the user and communicates back and\nforth with a Docker daemon. Inside Docker To understand Docker's internals, you need to know about three components: Docker images. Docker registries. Docker containers. Docker images A Docker image is a read-only template. For example, an image could contain an Ubuntu\noperating system with Apache and your web application installed. Images are used to create\nDocker containers. Docker provides a simple way to build new images or update existing\nimages, or you can download Docker images that other people have already created.\nDocker images are the build component of Docker. Docker Registries Docker registries hold images. These are public or private stores from which you upload\nor download images. The public Docker registry is called Docker Hub . It provides a huge collection of existing\nimages for your use. These can be images you create yourself or you\ncan use images that others have previously created. Docker registries are the distribution component of Docker. Docker containers Docker containers are similar to a directory. A Docker container holds everything that\nis needed for an application to run. Each container is created from a Docker\nimage. 
Docker containers can be run, started, stopped, moved, and deleted. Each\ncontainer is an isolated and secure application platform. Docker containers are the\n run component of Docker.",
"title": "What is Docker's architecture?"
},
{
"loc": "/introduction/understanding-docker#so-how-does-docker-work",
"tags": "",
"text": "So far, we've learned that: You can build Docker images that hold your applications. You can create Docker containers from those Docker images to run your\n applications. You can share those Docker images via\n Docker Hub or your own registry. Let's look at how these elements combine together to make Docker work. How does a Docker Image work? We've already seen that Docker images are read-only templates from which Docker\ncontainers are launched. Each image consists of a series of layers. Docker\nmakes use of union file systems to\ncombine these layers into a single image. Union file systems allow files and\ndirectories of separate file systems, known as branches, to be transparently\noverlaid, forming a single coherent file system. One of the reasons Docker is so lightweight is because of these layers. When you\nchange a Docker image\u2014for example, update an application to a new version\u2014 a new layer\ngets built. Thus, rather than replacing the whole image or entirely\nrebuilding, as you may do with a virtual machine, only that layer is added or\nupdated. Now you don't need to distribute a whole new image, just the update,\nmaking distributing Docker images faster and simpler. Every image starts from a base image, for example ubuntu , a base Ubuntu image,\nor fedora , a base Fedora image. You can also use images of your own as the\nbasis for a new image, for example if you have a base Apache image you could use\nthis as the base of all your web application images. Note: Docker usually gets these base images from Docker Hub . Docker images are then built from these base images using a simple, descriptive\nset of steps we call instructions . Each instruction creates a new layer in our\nimage. Instructions include actions like: Run a command. Add a file or directory. Create an environment variable. What process to run when launching a container from this image. These instructions are stored in a file called a Dockerfile . 
Docker reads this Dockerfile when you request a build of an image, executes the instructions, and\nreturns a final image. How does a Docker registry work? The Docker registry is the store for your Docker images. Once you build a Docker\nimage you can push it to a public registry Docker Hub or to \nyour own registry running behind your firewall. Using the Docker client, you can search for already published images and then\npull them down to your Docker host to build containers from them. Docker Hub provides both public and private storage\nfor images. Public storage is searchable and can be downloaded by anyone.\nPrivate storage is excluded from search results and only you and your users can\npull images down and use them to build containers. You can sign up for a storage plan\nhere . How does a container work? A container consists of an operating system, user-added files, and meta-data. As\nwe've seen, each container is built from an image. That image tells Docker\nwhat the container holds, what process to run when the container is launched, and\na variety of other configuration data. The Docker image is read-only. When\nDocker runs a container from an image, it adds a read-write layer on top of the\nimage (using a union file system as we saw earlier) in which your application can\nthen run. What happens when you run a container? Either by using the docker binary or via the API, the Docker client tells the Docker\ndaemon to run a container. $ sudo docker run -i -t ubuntu /bin/bash Let's break down this command. The Docker client is launched using the docker \nbinary with the run option telling it to launch a new container. The bare\nminimum the Docker client needs to tell the Docker daemon to run the container\nis: What Docker image to build the container from, here ubuntu , a base Ubuntu\nimage; The command you want to run inside the container when it is launched,\nhere /bin/bash , to start the Bash shell inside the new container. 
So what happens under the hood when we run this command? In order, Docker does the following: Pulls the ubuntu image: Docker checks for the presence of the ubuntu \nimage and, if it doesn't exist locally on the host, then Docker downloads it from Docker Hub . If the image already exists, then Docker\nuses it for the new container. Creates a new container: Once Docker has the image, it uses it to create a\ncontainer. Allocates a filesystem and mounts a read-write layer : The container is created in \nthe file system and a read-write layer is added to the image. Allocates a network / bridge interface: Creates a network interface that allows the \nDocker container to talk to the local host. Sets up an IP address: Finds and attaches an available IP address from a pool. Executes a process that you specify: Runs your application, and; Captures and provides application output: Connects and logs standard input, outputs \nand errors for you to see how your application is running. You now have a running container! From here you can manage your container, interact with\nyour application and then, when finished, stop and remove your container.",
"title": "So how does Docker work?"
},
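The layered-image model this entry describes (read-only layers overlaid by a union file system, plus a read-write layer per container) can be sketched with Python's `ChainMap`. This is an illustrative analogy only, not how Docker is implemented: lookups fall through the read-only layers, and writes land only in the top read-write layer.

```python
from collections import ChainMap

# Read-only image layers: a base layer plus an update layer.
# Updating the app ships only the changed files in a new layer.
base_layer = {"/bin/bash": "v1", "/etc/os-release": "ubuntu"}
update_layer = {"/bin/bash": "v2"}

# Running a container adds an empty read-write layer on top of the image.
rw_layer = {}
container_fs = ChainMap(rw_layer, update_layer, base_layer)

print(container_fs["/bin/bash"])        # upper layer wins: v2
print(container_fs["/etc/os-release"])  # falls through to the base: ubuntu

# Writes from the running application go to the read-write layer only;
# the image layers underneath are never modified.
container_fs["/tmp/app.log"] = "hello"
print("/tmp/app.log" in rw_layer)       # True
print("/tmp/app.log" in base_layer)     # False
```

The same fall-through rule is why distributing an updated image means shipping only the new layer, not the whole stack.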
{
"loc": "/introduction/understanding-docker#the-underlying-technology",
"tags": "",
"text": "Docker is written in Go and makes use of several Linux kernel features to\ndeliver the functionality we've seen. Namespaces Docker takes advantage of a technology called namespaces to provide the\nisolated workspace we call the container . When you run a container, Docker\ncreates a set of namespaces for that container. This provides a layer of isolation: each aspect of a container runs in its own\nnamespace and does not have access outside it. Some of the namespaces that Docker uses are: The pid namespace: Used for process isolation (PID: Process ID). The net namespace: Used for managing network interfaces (NET:\n Networking). The ipc namespace: Used for managing access to IPC\n resources (IPC: InterProcess Communication). The mnt namespace: Used for managing mount-points (MNT: Mount). The uts namespace: Used for isolating kernel and version identifiers. (UTS: Unix\nTimesharing System). Control groups Docker also makes use of another technology called cgroups or control groups.\nA key to running applications in isolation is to have them only use the\nresources you want. This ensures containers are good multi-tenant citizens on a\nhost. Control groups allow Docker to share available hardware resources to\ncontainers and, if required, set up limits and constraints. For example,\nlimiting the memory available to a specific container. Union file systems Union file systems, or UnionFS, are file systems that operate by creating layers,\nmaking them very lightweight and fast. Docker uses union file systems to provide\nthe building blocks for containers. Docker can make use of several union file system variants\nincluding: AUFS, btrfs, vfs, and DeviceMapper. Container format Docker combines these components into a wrapper we call a container format. The\ndefault container format is called libcontainer . Docker also supports\ntraditional Linux containers using LXC . 
In the \nfuture, Docker may support other container formats, for example, by integrating with\nBSD Jails or Solaris Zones.",
"title": "The underlying technology"
},
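The namespaces listed in this entry (pid, net, ipc, mnt, uts, and others) are visible on any Linux host under `/proc/<pid>/ns`; every process, containerized or not, belongs to one namespace of each kind. A minimal sketch, assuming a Linux `/proc` filesystem (on other platforms it simply returns an empty list):

```python
import os

def current_namespaces(proc_ns_dir="/proc/self/ns"):
    """Return the namespace kinds this process belongs to, sorted.

    On Linux each entry under /proc/self/ns (e.g. 'pid', 'net', 'ipc',
    'mnt', 'uts') is a symlink naming one namespace of that kind.
    """
    if not os.path.isdir(proc_ns_dir):  # non-Linux hosts have no /proc/self/ns
        return []
    return sorted(os.listdir(proc_ns_dir))

print(current_namespaces())
```

When Docker starts a container it creates fresh namespaces of these kinds for it, so the container's `/proc/self/ns` entries point at different namespaces than the host's.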
{
"loc": "/introduction/understanding-docker#next-steps",
"tags": "",
"text": "Installing Docker Visit the installation section . The Docker User Guide Learn Docker in depth .",
"title": "Next steps"
},
{
"loc": "/installation/",
"tags": "",
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
"title": "**HIDDEN**"
},
{
"loc": "/installation#table-of-contents",
"tags": "",
"text": "",
"title": "Table of Contents"
},
{
"loc": "/installation#about",
"tags": "",
"text": "Docker Release Notes Understanding Docker",
"title": "About"
},
{
"loc": "/installation#installation",
"tags": "",
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
"title": "Installation"
},
{
"loc": "/installation#user-guide",
"tags": "",
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
"title": "User Guide"
},
{
"loc": "/installation#docker-hub",
"tags": "",
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
"title": "Docker Hub"
},
{
"loc": "/installation#examples",
"tags": "",
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
"title": "Examples"
},
{
"loc": "/installation#articles",
"tags": "",
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
"title": "Articles"
},
{
"loc": "/installation#reference",
"tags": "",
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
"title": "Reference"
},
{
"loc": "/installation#contributor-guide",
"tags": "",
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
"title": "Contributor Guide"
},
{
"loc": "/installation/ubuntulinux/",
"tags": "",
"text": "Ubuntu\nDocker is supported on these Ubuntu operating systems:\n\nUbuntu Trusty 14.04 (LTS) \nUbuntu Precise 12.04 (LTS) \nUbuntu Saucy 13.10\n\nThis page instructs you to install using Docker-managed release packages and\ninstallation mechanisms. Using these packages ensures you get the latest release\nof Docker. If you wish to install using Ubuntu-managed packages, consult your\nUbuntu documentation.\nPrerequisites\nDocker requires a 64-bit installation regardless of your Ubuntu version.\nAdditionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version\nor a newer maintained version are also acceptable.\nKernels older than 3.10 lack some of the features required to run Docker\ncontainers. These older versions are known to have bugs which cause data loss\nand frequently panic under certain conditions.\nTo check your current kernel version, open a terminal and use uname -r to display\nyour kernel version:\n$ uname -r \n3.11.0-15-generic\n\n\nCaution Some Ubuntu OS versions require a version higher than 3.10 to\nrun Docker, see the prerequisites on this page that apply to your Ubuntu\nversion.\n\nFor Trusty 14.04\nThere are no prerequisites for this version.\nFor Precise 12.04 (LTS)\nFor Ubuntu Precise, Docker requires the 3.13 kernel version. If your kernel\nversion is older than 3.13, you must upgrade it. Refer to this table to see\nwhich packages are required for your environment:\n\n linux-image-generic-lts-trusty Generic\nLinux kernel image. This kernel has AUFS built in. This is required to run\nDocker. linux-headers-generic-lts-trusty\nAllows packages such as ZFS and VirtualBox guest additions\nwhich depend on them. 
If you didn't install the headers for your existing\nkernel, then you can skip these headers for the \"trusty\" kernel. If you're\nunsure, you should include this package for safety. xserver-xorg-lts-trusty Optional in non-graphical environments without Unity/Xorg.\nRequired when running Docker on a machine with a graphical environment.\nTo learn more about the reasons for these packages, read the installation\ninstructions for backported kernels, specifically the LTS\nEnablement Stack; refer to note 5 under each version.\n\n \n libgl1-mesa-glx-lts-trusty \nTo upgrade your kernel and install the additional packages, do the following:\n\n\nOpen a terminal on your Ubuntu host.\n\n\nUpdate your package manager.\n$ sudo apt-get update\n\n\n\nInstall both the required and optional packages.\n$ sudo apt-get install linux-image-generic-lts-trusty\n\nDepending on your environment, you may install more as described in the preceding table.\n\n\nReboot your host.\n$ sudo reboot\n\n\n\nAfter your system reboots, go ahead and install Docker.\n\n\nFor Saucy 13.10 (64 bit)\nDocker uses AUFS as the default storage backend. If you don't have this\nprerequisite installed, Docker's installation process adds it.\nInstalling Docker on Ubuntu\nMake sure you have installed the prerequisites for your Ubuntu version. Then,\ninstall Docker using the following:\n\n\nLog into your Ubuntu installation as a user with sudo privileges.\n\n\nVerify that you have wget installed.\n$ which wget\n\nIf wget isn't installed, install it after updating your package manager:\n$ sudo apt-get update $ sudo apt-get install wget\n\n\n\nGet the latest Docker package.\n$ wget -qO- https://get.docker.com/ | sh\n\nThe system prompts you for your sudo password. 
Then, it downloads and\n installs Docker and its dependencies.\n\n\nVerify docker is installed correctly.\n$ sudo docker run hello-world\n\nThis command downloads a test image and runs it in a container.\n\n\nOptional Configurations for Docker on Ubuntu\nThis section contains optional procedures for configuring your Ubuntu to work\nbetter with Docker.\n\nCreate a docker group \nAdjust memory and swap accounting \nEnable UFW forwarding \nConfigure a DNS server for use by Docker\n\nCreate a docker group\nThe docker daemon binds to a Unix socket instead of a TCP port. By default\nthat Unix socket is owned by the user root and other users can access it with\nsudo. For this reason, docker daemon always runs as the root user.\nTo avoid having to use sudo when you use the docker command, create a Unix\ngroup called docker and add users to it. When the docker daemon starts, it\nmakes the ownership of the Unix socket read/writable by the docker group.\n\nWarning: The docker group is equivalent to the root user; For details\non how this impacts security in your system, see Docker Daemon Attack\nSurface for details.\n\nTo create the docker group and add your user:\n\n\nLog into Ubuntu as a user with sudo privileges.\nThis procedure assumes you log in as the ubuntu user.\n\n\nCreate the docker group and add your user.\n$ sudo usermod -aG docker ubuntu\n\n\n\nLog out and log back in.\nThis ensures your user is running with the correct permissions.\n\n\nVerify your work by running docker without sudo.\n$ docker run hello-world\n\n\n\nAdjust memory and swap accounting\nWhen users run Docker, they may see these messages when working with an image:\nWARNING: Your kernel does not support cgroup swap limit. WARNING: Your\nkernel does not support swap limit capabilities. Limitation discarded.\n\nTo prevent these messages, enable memory and swap accounting on your system. 
To\nenable these on a system using GNU GRUB (GNU GRand Unified Bootloader), do the\nfollowing.\n\n\nLog into Ubuntu as a user with sudo privileges.\n\n\nEdit the /etc/default/grub file.\n\n\nSet the GRUB_CMDLINE_LINUX value as follows:\nGRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\"\n\n\n\nSave and close the file.\n\n\nUpdate GRUB.\n$ sudo update-grub\n\n\n\nReboot your system.\n\n\nEnable UFW forwarding\nIf you use UFW (Uncomplicated Firewall)\non the same host as you run Docker, you'll need to do additional configuration.\nDocker uses a bridge to manage container networking. By default, UFW drops all\nforwarding traffic. As a result, for Docker to run when UFW is\nenabled, you must set UFW's forwarding policy appropriately.\nAlso, UFW's default set of rules denies all incoming traffic. If you want to be able\nto reach your containers from another host then you should also allow incoming\nconnections on the Docker port (default 2375).\nTo configure UFW and allow incoming connections on the Docker port:\n\n\nLog into Ubuntu as a user with sudo privileges.\n\n\nVerify that UFW is installed and enabled.\n$ sudo ufw status\n\n\n\nOpen the /etc/default/ufw file for editing.\n$ sudo nano /etc/default/ufw\n\n\n\nSet the DEFAULT_FORWARD_POLICY policy to:\nDEFAULT_FORWARD_POLICY=\"ACCEPT\"\n\n\n\nSave and close the file.\n\n\nReload UFW to use the new setting.\n$ sudo ufw reload\n\n\n\nAllow incoming connections on the Docker port.\n$ sudo ufw allow 2375/tcp\n\n\n\nConfigure a DNS server for use by Docker\nSystems that run Ubuntu or an Ubuntu derivative on the desktop typically use\n127.0.0.1 as the default nameserver in the /etc/resolv.conf file. 
The\nNetworkManager also sets up dnsmasq to use the real DNS servers of the\nconnection and sets up nameserver 127.0.0.1 in /etc/resolv.conf.\nWhen starting containers on desktop machines with these configurations, Docker\nusers see this warning:\nWARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers\ncan't use it. Using default external servers : [8.8.8.8 8.8.4.4]\n\nThe warning occurs because Docker containers can't use the local DNS nameserver.\nInstead, Docker defaults to using an external nameserver.\nTo avoid this warning, you can specify a DNS server for use by Docker\ncontainers. Or, you can disable dnsmasq in NetworkManager. However, disabling\ndnsmasq might make DNS resolution slower on some networks.\nTo specify a DNS server for use by Docker:\n\n\nLog into Ubuntu as a user with sudo privileges.\n\n\nOpen the /etc/default/docker file for editing.\n$ sudo nano /etc/default/docker\n\n\n\nAdd a setting for Docker.\nDOCKER_OPTS=\"--dns 8.8.8.8\"\n\nReplace 8.8.8.8 with a local DNS server such as 192.168.1.1. You can also\nspecify multiple DNS servers. Separate them with spaces, for example:\n--dns 8.8.8.8 --dns 192.168.1.1\n\n\nWarning: If you're doing this on a laptop which connects to various\nnetworks, make sure to choose a public DNS server.\n\n\n\nSave and close the file.\n\n\nRestart the Docker daemon.\n$ sudo restart docker\n\n\n\n\n\nOr, as an alternative to the previous procedure, disable dnsmasq in\nNetworkManager (this might slow your network).\n\n\nOpen the /etc/NetworkManager/NetworkManager.conf file for editing.\n$ sudo nano /etc/NetworkManager/NetworkManager.conf\n\n\n\nComment out the dns=dnsmasq line:\ndns=dnsmasq\n\n\n\nSave and close the file.\n\n\nRestart both the NetworkManager and Docker.\n$ sudo restart network-manager $ sudo restart docker\n\n\n\nUpgrade Docker\nTo upgrade to the latest version of Docker, re-run the installation script:\n$ wget -qO- https://get.docker.com/ | sh",
"title": "Ubuntu"
},
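The DNS configuration step in the entry above assembles a `DOCKER_OPTS` line for `/etc/default/docker` by repeating the `--dns` flag. A small helper sketch showing how multiple servers are joined (`docker_opts_dns` is a hypothetical name for illustration, not part of Docker):

```python
def docker_opts_dns(servers):
    """Build a DOCKER_OPTS value for /etc/default/docker.

    Each DNS server becomes its own --dns flag, space-separated,
    e.g. --dns 8.8.8.8 --dns 192.168.1.1 as in the procedure above.
    """
    flags = " ".join(f"--dns {s}" for s in servers)
    return f'DOCKER_OPTS="{flags}"'

print(docker_opts_dns(["8.8.8.8"]))
print(docker_opts_dns(["8.8.8.8", "192.168.1.1"]))
```

On a laptop that moves between networks, prefer public servers in this list, as the entry's warning notes.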
{
"loc": "/installation/ubuntulinux#ubuntu",
"tags": "",
"text": "Docker is supported on these Ubuntu operating systems: Ubuntu Trusty 14.04 (LTS) Ubuntu Precise 12.04 (LTS) Ubuntu Saucy 13.10 This page instructs you to install using Docker-managed release packages and\ninstallation mechanisms. Using these packages ensures you get the latest release\nof Docker. If you wish to install using Ubuntu-managed packages, consult your\nUbuntu documentation.",
"title": "Ubuntu"
},
{
"loc": "/installation/ubuntulinux#prerequisites",
"tags": "",
"text": "Docker requires a 64-bit installation regardless of your Ubuntu version.\nAdditionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version\nor a newer maintained version are also acceptable. Kernels older than 3.10 lack some of the features required to run Docker\ncontainers. These older versions are known to have bugs which cause data loss\nand frequently panic under certain conditions. To check your current kernel version, open a terminal and use uname -r to display\nyour kernel version: $ uname -r \n3.11.0-15-generic Caution Some Ubuntu OS versions require a version higher than 3.10 to\nrun Docker, see the prerequisites on this page that apply to your Ubuntu\nversion. For Trusty 14.04 There are no prerequisites for this version. For Precise 12.04 (LTS) For Ubuntu Precise, Docker requires the 3.13 kernel version. If your kernel\nversion is older than 3.13, you must upgrade it. Refer to this table to see\nwhich packages are required for your environment: linux-image-generic-lts-trusty Generic\nLinux kernel image. This kernel has AUFS built in. This is required to run\nDocker. linux-headers-generic-lts-trusty Allows packages such as ZFS and VirtualBox guest additions\nwhich depend on them. If you didn't install the headers for your existing\nkernel, then you can skip these headers for the \"trusty\" kernel. If you're\nunsure, you should include this package for safety. xserver-xorg-lts-trusty Optional in non-graphical environments without Unity/Xorg. Required when running Docker on a machine with a graphical environment. To learn more about the reasons for these packages, read the installation\ninstructions for backported kernels, specifically the LTS\nEnablement Stack; refer to note 5 under each version. 
libgl1-mesa-glx-lts-trusty To upgrade your kernel and install the additional packages, do the following: Open a terminal on your Ubuntu host. Update your package manager. $ sudo apt-get update Install both the required and optional packages. $ sudo apt-get install linux-image-generic-lts-trusty Depending on your environment, you may install more as described in the preceding table. Reboot your host. $ sudo reboot After your system reboots, go ahead and install Docker . For Saucy 13.10 (64 bit) Docker uses AUFS as the default storage backend. If you don't have this\nprerequisite installed, Docker's installation process adds it.",
|
|
"title": "Prerequisites"
|
|
},
|
|
{
|
|
"loc": "/installation/ubuntulinux#installing-docker-on-ubuntu",
|
|
"tags": "",
|
|
"text": "Make sure you have intalled the prerequisites for your Ubuntu version. Then,\ninstall Docker using the following: Log into your Ubuntu installation as a user with sudo privileges. Verify that you have wget installed. $ which wget If wget isn't installed, install it after updating your manager: $ sudo apt-get update $ sudo apt-get install wget Get the latest Docker package. $ wget -qO- https://get.docker.com/ | sh The system prompts you for your sudo password. Then, it downloads and\n installs Docker and its dependencies. Verify docker is installed correctly. $ sudo docker run hello-world This command downloads a test image and runs it in a container.",
|
|
"title": "Installing Docker on Ubuntu"
|
|
},
|
|
{
|
|
"loc": "/installation/ubuntulinux#optional-configurations-for-docker-on-ubuntu",
|
|
"tags": "",
|
|
"text": "This section contains optional procedures for configuring your Ubuntu to work\nbetter with Docker. Create a docker group Adjust memory and swap accounting Enable UFW forwarding Configure a DNS server for use by Docker Create a docker group The docker daemon binds to a Unix socket instead of a TCP port. By default\nthat Unix socket is owned by the user root and other users can access it with sudo . For this reason, docker daemon always runs as the root user. To avoid having to use sudo when you use the docker command, create a Unix\ngroup called docker and add users to it. When the docker daemon starts, it\nmakes the ownership of the Unix socket read/writable by the docker group. Warning : The docker group is equivalent to the root user; For details\non how this impacts security in your system, see Docker Daemon Attack\nSurface for details. To create the docker group and add your user: Log into Ubuntu as a user with sudo privileges. This procedure assumes you log in as the ubuntu user. Create the docker group and add your user. $ sudo usermod -aG docker ubuntu Log out and log back in. This ensures your user is running with the correct permissions. Verify your work by running docker without sudo . $ docker run hello-world Adjust memory and swap accounting When users run Docker, they may see these messages when working with an image: WARNING: Your kernel does not support cgroup swap limit. WARNING: Your\nkernel does not support swap limit capabilities. Limitation discarded. To prevent these messages, enable memory and swap accounting on your system. To\nenable these on system using GNU GRUB (GNU GRand Unified Bootloader), do the\nfollowing. Log into Ubuntu as a user with sudo privileges. Edit the /etc/default/grub file. Set the GRUB_CMDLINE_LINUX value as follows: GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\" Save and close the file. Update GRUB. $ sudo update-grub Reboot your system. 
Enable UFW forwarding If you use UFW (Uncomplicated Firewall) \non the same host as you run Docker, you'll need to do additional configuration.\nDocker uses a bridge to manage container networking. By default, UFW drops all\nforwarding traffic. As a result, for Docker to run when UFW is\nenabled, you must set UFW's forwarding policy appropriately. Also, UFW's default set of rules denies all incoming traffic. If you want to be able\nto reach your containers from another host, then you should also allow incoming\nconnections on the Docker port (default 2375 ). To configure UFW and allow incoming connections on the Docker port: Log into Ubuntu as a user with sudo privileges. Verify that UFW is installed and enabled. $ sudo ufw status Open the /etc/default/ufw file for editing. $ sudo nano /etc/default/ufw Set the DEFAULT_FORWARD_POLICY policy to: DEFAULT_FORWARD_POLICY=\"ACCEPT\" Save and close the file. Reload UFW to use the new setting. $ sudo ufw reload Allow incoming connections on the Docker port. $ sudo ufw allow 2375/tcp Configure a DNS server for use by Docker Systems that run Ubuntu or an Ubuntu derivative on the desktop typically use 127.0.0.1 as the default nameserver in the /etc/resolv.conf file. The\nNetworkManager also sets up dnsmasq to use the real DNS servers of the\nconnection and sets up nameserver 127.0.0.1 in /etc/resolv.conf . When starting containers on desktop machines with these configurations, Docker\nusers see this warning: WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers\ncan't use it. Using default external servers : [8.8.8.8 8.8.4.4] The warning occurs because Docker containers can't use the local DNS nameserver.\nInstead, Docker defaults to using an external nameserver. To avoid this warning, you can specify a DNS server for use by Docker\ncontainers. Or, you can disable dnsmasq in NetworkManager. Though, disabling dnsmasq might make DNS resolution slower on some networks. 
To specify a DNS server for use by Docker: Log into Ubuntu as a user with sudo privileges. Open the /etc/default/docker file for editing. $ sudo nano /etc/default/docker Add a setting for Docker. DOCKER_OPTS=\"--dns 8.8.8.8\" Replace 8.8.8.8 with a local DNS server such as 192.168.1.1 . You can also\nspecify multiple DNS servers. Separate them with spaces, for example: --dns 8.8.8.8 --dns 192.168.1.1 Warning : If you're doing this on a laptop which connects to various\nnetworks, make sure to choose a public DNS server. Save and close the file. Restart the Docker daemon. $ sudo restart docker Or, as an alternative to the previous procedure, disable dnsmasq in\nNetworkManager (this might slow your network). Open the /etc/NetworkManager/NetworkManager.conf file for editing. $ sudo nano /etc/NetworkManager/NetworkManager.conf Comment out the dns=dnsmasq line: # dns=dnsmasq Save and close the file. Restart both NetworkManager and Docker. $ sudo restart network-manager $ sudo restart docker",
|
|
"title": "Optional Configurations for Docker on Ubuntu"
|
|
},
|
|
{
|
|
"loc": "/installation/ubuntulinux#upgrade-docker",
|
|
"tags": "",
|
|
"text": "To install the latest version of Docker, use the standard -N flag with wget : $ wget -N https://get.docker.com/ | sh",
|
|
"title": "Upgrade Docker"
|
|
},
|
|
{
|
|
"loc": "/installation/mac/",
|
|
"tags": "",
|
|
"text": "Install Docker on Mac OS X\nYou can install Docker using Boot2Docker to run docker commands at your command-line.\nChoose this installation if you are familiar with the command-line or plan to\ncontribute to the Docker project on GitHub.\nAlternatively, you may want to try Kitematic, an application that lets you set up Docker and\nrun containers using a graphical user interface (GUI).\n\nCommand-line Docker with Boot2Docker\nBecause the Docker daemon uses Linux-specific kernel features, you can't run\nDocker natively in OS X. Instead, you must install the Boot2Docker application.\nThe application includes a VirtualBox Virtual Machine (VM), Docker itself, and the\nBoot2Docker management tool.\nThe Boot2Docker management tool is a lightweight Linux virtual machine made\nspecifically to run the Docker daemon on Mac OS X. The VirtualBox VM runs\ncompletely from RAM, is a small ~24MB download, and boots in approximately 5s.\nRequirements\nYour Mac must be running OS X 10.6 \"Snow Leopard\" or newer to run Boot2Docker.\nLearn the key concepts before installing\nIn a Docker installation on Linux, your machine is both the localhost and the\nDocker host. In networking, localhost means your computer. The Docker host is\nthe machine on which the containers run.\nOn a typical Linux installation, the Docker client, the Docker daemon, and any\ncontainers run directly on your localhost. This means you can address ports on a\nDocker container using standard localhost addressing such as localhost:8000 or\n0.0.0.0:8376.\n\nIn an OS X installation, the docker daemon is running inside a Linux virtual\nmachine provided by Boot2Docker.\n\nIn OS X, the Docker host address is the address of the Linux VM.\nWhen you start the boot2docker process, the VM is assigned an IP address. Under\nboot2docker ports on a container map to ports on the VM. 
To see this in\npractice, work through the exercises on this page.\nInstall Boot2Docker\n\n\nGo to the boot2docker/osx-installer release page.\n\n\nDownload Boot2Docker by clicking Boot2Docker-x.x.x.pkg in the \"Downloads\"\nsection.\n\n\nInstall Boot2Docker by double-clicking the package.\nThe installer places Boot2Docker in your \"Applications\" folder.\n\n\nThe installation places the docker and boot2docker binaries in your\n/usr/local/bin directory.\nStart the Boot2Docker Application\nTo run a Docker container, you first start the boot2docker VM and then issue\ndocker commands to create, load, and manage containers. You can launch\nboot2docker from your Applications folder or from the command line.\n\nNOTE: Boot2Docker is designed as a development tool. You should not use\n it in production environments.\n\nFrom the Applications folder\nWhen you launch the \"Boot2Docker\" application from your \"Applications\" folder, the\napplication:\n\n\nopens a terminal window\n\n\ncreates a $HOME/.boot2docker directory\n\n\ncreates a VirtualBox ISO and certs\n\n\nstarts a VirtualBox VM running the docker daemon\n\n\nOnce the launch completes, you can run docker commands. A good way to verify\nyour setup succeeded is to run the hello-world container.\n $ docker run hello-world\n Unable to find image 'hello-world:latest' locally\n 511136ea3c5a: Pull complete\n 31cbccb51277: Pull complete\n e45a5af57b00: Pull complete\n hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.\n Status: Downloaded newer image for hello-world:latest\n Hello from Docker.\n This message shows that your installation appears to be working correctly.\n\n To generate this message, Docker took the following steps:\n 1. The Docker client contacted the Docker daemon.\n 2. 
The Docker daemon pulled the \"hello-world\" image from the Docker Hub.\n (Assuming it was not already locally available.)\n 3. The Docker daemon created a new container from that image which runs the\n executable that produces the output you are currently reading.\n 4. The Docker daemon streamed that output to the Docker client, which sent it\n to your terminal.\n\n To try something more ambitious, you can run an Ubuntu container with:\n $ docker run -it ubuntu bash\n\n For more examples and ideas, visit:\n http://docs.docker.com/userguide/\n\nA more typical way to start and stop boot2docker is using the command line.\nFrom your command line\nInitialize and run boot2docker from the command line, do the following:\n\n\nCreate a new Boot2Docker VM.\n$ boot2docker init\n\nThis creates a new virtual machine. You only need to run this command once.\n\n\nStart the boot2docker VM.\n$ boot2docker start\n\n\n\nDisplay the environment variables for the Docker client.\n$ boot2docker shellinit\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/ca.pem\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/cert.pem\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/key.pem\n export DOCKER_HOST=tcp://192.168.59.103:2376\n export DOCKER_CERT_PATH=/Users/mary/.boot2docker/certs/boot2docker-vm\n export DOCKER_TLS_VERIFY=1\n\nThe specific paths and address on your machine will be different.\n\n\nTo set the environment variables in your shell do the following:\n$ eval \"$(boot2docker shellinit)\"\n\nYou can also set them manually by using the export commands boot2docker\nreturns.\n\n\nRun the hello-world container to verify your setup.\n$ docker run hello-world\n\n\n\nBasic Boot2Docker Exercises\nAt this point, you should have boot2docker running and the docker client\nenvironment initialized. 
To verify this, run the following commands:\n$ boot2docker status\n$ docker version\n\nWork through this section to try some practical container tasks using the boot2docker VM.\nAccess container ports\n\n\nStart an NGINX container on the DOCKER_HOST.\n$ docker run -d -P --name web nginx\n\nNormally, the docker run command starts a container, runs it, and then\nexits. The -d flag keeps the container running in the background\nafter the docker run command completes. The -P flag publishes exposed ports from the\ncontainer to your local host; this lets you access them from your Mac.\n\n\nDisplay your running container with the docker ps command\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n5fb65ff765e9 nginx:latest \"nginx -g 'daemon of 3 minutes ago Up 3 minutes 0.0.0.0:49156->443/tcp, 0.0.0.0:49157->80/tcp web\n\nAt this point, you can see nginx is running as a daemon.\n\n\nView just the container's ports.\n$ docker port web\n443/tcp -> 0.0.0.0:49156\n80/tcp -> 0.0.0.0:49157\n\nThis tells you that the web container's port 80 is mapped to port\n49157 on your Docker host.\n\n\nEnter the http://localhost:49157 address (localhost is 0.0.0.0) in your browser:\n\nThis didn't work. The reason it doesn't work is your DOCKER_HOST address is\nnot the localhost address (0.0.0.0) but is instead the address of the\nboot2docker VM.\n\n\nGet the address of the boot2docker VM.\n$ boot2docker ip\n192.168.59.103\n\n\n\nEnter the http://192.168.59.103:49157 address in your browser:\n\nSuccess!\n\n\nTo stop and then remove your running nginx container, do the following:\n$ docker stop web\n$ docker rm web\n\n\n\nMount a volume on the container\nWhen you start boot2docker, it automatically shares your /Users directory\nwith the VM. 
You can use this share point to mount directories onto your container.\nThe next exercise demonstrates how to do this.\n\n\nChange to your user $HOME directory.\n$ cd $HOME\n\n\n\nMake a new site directory.\n$ mkdir site\n\n\n\nChange into the site directory.\n$ cd site\n\n\n\nCreate a new index.html file.\n$ echo \"my new site\" > index.html\n\n\n\nStart a new nginx container and replace the html folder with your site directory.\n$ docker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx\n\n\n\nGet the mysite container's port.\n$ docker port mysite\n80/tcp -> 0.0.0.0:49166\n443/tcp -> 0.0.0.0:49165\n\n\n\nOpen the site in a browser:\n\n\n\nTry adding a page to your $HOME/site in real time.\n$ echo \"This is cool\" > cool.html\n\n\n\nOpen the new page in the browser.\n\n\n\nStop and then remove your running mysite container.\n$ docker stop mysite\n$ docker rm mysite\n\n\n\nUpgrade Boot2Docker\nIf you are running Boot2Docker 1.4.1 or greater, you can upgrade Boot2Docker from\nthe command line. If you are running an older version, you should use the\npackage provided by the boot2docker repository.\nFrom the command line\nTo upgrade from 1.4.1 or greater, you can do this:\n\n\nOpen a terminal on your local machine.\n\n\nStop the boot2docker application.\n$ boot2docker stop\n\n\n\nRun the upgrade command.\n$ boot2docker upgrade\n\n\n\nUse the installer\nTo upgrade any version of Boot2Docker, do this:\n\n\nOpen a terminal on your local machine.\n\n\nStop the boot2docker application.\n$ boot2docker stop\n\n\n\nGo to the boot2docker/osx-installer release page.\n\n\nDownload Boot2Docker by clicking Boot2Docker-x.x.x.pkg in the \"Downloads\"\nsection.\n\n\nInstall Boot2Docker by double-clicking the package.\nThe installer places Boot2Docker in your \"Applications\" folder.\n\n\nLearning more and Acknowledgement\nUse boot2docker help to list the full command line reference. 
For more\ninformation about using SSH or SCP to access the Boot2Docker VM, see the README\nat Boot2Docker repository.\nThanks to Chris Jones whose blog inspired me to redo\nthis page.\nContinue with the Docker User Guide.",
|
|
"title": "Mac OS X"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#install-docker-on-mac-os-x",
|
|
"tags": "",
|
|
"text": "You can install Docker using Boot2Docker to run docker commands at your command-line.\nChoose this installation if you are familiar with the command-line or plan to\ncontribute to the Docker project on GitHub. Alternatively, you may want to try Kitematic , an application that lets you set up Docker and\nrun containers using a graphical user interface (GUI).",
|
|
"title": "Install Docker on Mac OS X"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#command-line-docker-with-boot2docker",
|
|
"tags": "",
|
|
"text": "Because the Docker daemon uses Linux-specific kernel features, you can't run\nDocker natively in OS X. Instead, you must install the Boot2Docker application.\nThe application includes a VirtualBox Virtual Machine (VM), Docker itself, and the\nBoot2Docker management tool. The Boot2Docker management tool is a lightweight Linux virtual machine made\nspecifically to run the Docker daemon on Mac OS X. The VirtualBox VM runs\ncompletely from RAM, is a small ~24MB download, and boots in approximately 5s. Requirements Your Mac must be running OS X 10.6 \"Snow Leopard\" or newer to run Boot2Docker. Learn the key concepts before installing In a Docker installation on Linux, your machine is both the localhost and the\nDocker host. In networking, localhost means your computer. The Docker host is\nthe machine on which the containers run. On a typical Linux installation, the Docker client, the Docker daemon, and any\ncontainers run directly on your localhost. This means you can address ports on a\nDocker container using standard localhost addressing such as localhost:8000 or 0.0.0.0:8376 . In an OS X installation, the docker daemon is running inside a Linux virtual\nmachine provided by Boot2Docker. In OS X, the Docker host address is the address of the Linux VM.\nWhen you start the boot2docker process, the VM is assigned an IP address. Under boot2docker ports on a container map to ports on the VM. To see this in\npractice, work through the exercises on this page. Install Boot2Docker Go to the boot2docker/osx-installer release page. Download Boot2Docker by clicking Boot2Docker-x.x.x.pkg in the \"Downloads\"\nsection. Install Boot2Docker by double-clicking the package. The installer places Boot2Docker in your \"Applications\" folder. The installation places the docker and boot2docker binaries in your /usr/local/bin directory.",
|
|
"title": "Command-line Docker with Boot2Docker"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#start-the-boot2docker-application",
|
|
"tags": "",
|
|
"text": "To run a Docker container, you first start the boot2docker VM and then issue docker commands to create, load, and manage containers. You can launch boot2docker from your Applications folder or from the command line. NOTE : Boot2Docker is designed as a development tool. You should not use\n it in production environments. From the Applications folder When you launch the \"Boot2Docker\" application from your \"Applications\" folder, the\napplication: opens a terminal window creates a $HOME/.boot2docker directory creates a VirtualBox ISO and certs starts a VirtualBox VM running the docker daemon Once the launch completes, you can run docker commands. A good way to verify\nyour setup succeeded is to run the hello-world container. $ docker run hello-world\n Unable to find image 'hello-world:latest' locally\n 511136ea3c5a: Pull complete\n 31cbccb51277: Pull complete\n e45a5af57b00: Pull complete\n hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.\n Status: Downloaded newer image for hello-world:latest\n Hello from Docker.\n This message shows that your installation appears to be working correctly.\n\n To generate this message, Docker took the following steps:\n 1. The Docker client contacted the Docker daemon.\n 2. The Docker daemon pulled the \"hello-world\" image from the Docker Hub.\n (Assuming it was not already locally available.)\n 3. The Docker daemon created a new container from that image which runs the\n executable that produces the output you are currently reading.\n 4. The Docker daemon streamed that output to the Docker client, which sent it\n to your terminal.\n\n To try something more ambitious, you can run an Ubuntu container with:\n $ docker run -it ubuntu bash\n\n For more examples and ideas, visit:\n http://docs.docker.com/userguide/ A more typical way to start and stop boot2docker is using the command line. 
From your command line To initialize and run boot2docker from the command line, do the following: Create a new Boot2Docker VM. $ boot2docker init This creates a new virtual machine. You only need to run this command once. Start the boot2docker VM. $ boot2docker start Display the environment variables for the Docker client. $ boot2docker shellinit\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/ca.pem\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/cert.pem\nWriting /Users/mary/.boot2docker/certs/boot2docker-vm/key.pem\n export DOCKER_HOST=tcp://192.168.59.103:2376\n export DOCKER_CERT_PATH=/Users/mary/.boot2docker/certs/boot2docker-vm\n export DOCKER_TLS_VERIFY=1 The specific paths and address on your machine will be different. To set the environment variables in your shell do the following: $ eval \"$(boot2docker shellinit)\" You can also set them manually by using the export commands boot2docker \nreturns. Run the hello-world container to verify your setup. $ docker run hello-world",
|
|
"title": "Start the Boot2Docker Application"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#basic-boot2docker-exercises",
|
|
"tags": "",
|
|
"text": "At this point, you should have boot2docker running and the docker client\nenvironment initialized. To verify this, run the following commands: $ boot2docker status\n$ docker version Work through this section to try some practical container tasks using boot2docker VM. Access container ports Start an NGINX container on the DOCKER_HOST. $ docker run -d -P --name web nginx Normally, the docker run commands starts a container, runs it, and then\nexits. The -d flag keeps the container running in the background\nafter the docker run command completes. The -P flag publishes exposed ports from the\ncontainer to your local host; this lets you access them from your Mac. Display your running container with docker ps command CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n5fb65ff765e9 nginx:latest \"nginx -g 'daemon of 3 minutes ago Up 3 minutes 0.0.0.0:49156- 443/tcp, 0.0.0.0:49157- 80/tcp web At this point, you can see nginx is running as a daemon. View just the container's ports. $ docker port web\n443/tcp - 0.0.0.0:49156\n80/tcp - 0.0.0.0:49157 This tells you that the web container's port 80 is mapped to port 49157 on your Docker host. Enter the http://localhost:49157 address ( localhost is 0.0.0.0 ) in your browser: This didn't work. The reason it doesn't work is your DOCKER_HOST address is\nnot the localhost address (0.0.0.0) but is instead the address of the boot2docker VM. Get the address of the boot2docker VM. $ boot2docker ip\n192.168.59.103 Enter the http://192.168.59.103:49157 address in your browser: Success! To stop and then remove your running nginx container, do the following: $ docker stop web\n$ docker rm web Mount a volume on the container When you start boot2docker , it automatically shares your /Users directory\nwith the VM. You can use this share point to mount directories onto your container.\nThe next exercise demonstrates how to do this. Change to your user $HOME directory. $ cd $HOME Make a new site directory. 
$ mkdir site Change into the site directory. $ cd site Create a new index.html file. $ echo \"my new site\" > index.html Start a new nginx container and replace the html folder with your site directory. $ docker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx Get the mysite container's port. $ docker port mysite\n80/tcp -> 0.0.0.0:49166\n443/tcp -> 0.0.0.0:49165 Open the site in a browser: Try adding a page to your $HOME/site in real time. $ echo \"This is cool\" > cool.html Open the new page in the browser. Stop and then remove your running mysite container. $ docker stop mysite\n$ docker rm mysite",
|
|
"title": "Basic Boot2Docker Exercises"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#upgrade-boot2docker",
|
|
"tags": "",
|
|
"text": "If you running Boot2Docker 1.4.1 or greater, you can upgrade Boot2Docker from\nthe command line. If you are running an older version, you should use the\npackage provided by the boot2docker repository. From the command line To upgrade from 1.4.1 or greater, you can do this: Open a terminal on your local machine. Stop the boot2docker application. $ boot2docker stop Run the upgrade command. $ boot2docker upgrade Use the installer To upgrade any version of Boot2Docker, do this: Open a terminal on your local machine. Stop the boot2docker application. $ boot2docker stop Go to the boot2docker/osx-installer release page. Download Boot2Docker by clicking Boot2Docker-x.x.x.pkg in the \"Downloads\"\nsection. Install Boot2Docker by double-clicking the package. The installer places Boot2Docker in your \"Applications\" folder.",
|
|
"title": "Upgrade Boot2Docker"
|
|
},
|
|
{
|
|
"loc": "/installation/mac#learning-more-and-acknowledgement",
|
|
"tags": "",
|
|
"text": "Use boot2docker help to list the full command line reference. For more\ninformation about using SSH or SCP to access the Boot2Docker VM, see the README\nat Boot2Docker repository . Thanks to Chris Jones whose blog inspired me to redo\nthis page. Continue with the Docker User Guide .",
|
|
"title": "Learning more and Acknowledgement"
|
|
},
|
|
{
|
|
"loc": "/installation/windows/",
|
|
"tags": "",
|
|
"text": "Windows\n\nNote:\nDocker has been tested on Windows 7.1 and 8; it may also run on older versions.\nYour processor needs to support hardware virtualization.\n\nThe Docker Engine uses Linux-specific kernel features, so to run it on Windows\nwe need to use a lightweight virtual machine (vm). You use the Windows Docker client to\ncontrol the virtualized Docker Engine to build, run, and manage Docker containers.\nTo make this process easier, we've designed a helper application called\nBoot2Docker that installs the\nvirtual machine and runs the Docker daemon.\nDemonstration\n\n\nInstallation\n\nDownload the latest release of the Docker for Windows Installer\nRun the installer, which will install VirtualBox, MSYS-git, the boot2docker Linux ISO,\nand the Boot2Docker management tool.\n \nRun the Boot2Docker Start shell script from your Desktop or Program Files Boot2Docker for Windows.\n The Start script will ask you to enter an ssh key passphrase - the simplest\n (but least secure) is to just hit [Enter].\n\n\nThe Boot2Docker Start script will connect you to a shell session in the virtual\n machine. If needed, it will initialize a new VM and start it.\nUpgrading\n\n\nDownload the latest release of the Docker for Windows Installer\n\n\nRun the installer, which will update the Boot2Docker management tool.\n\n\nTo upgrade your existing virtual machine, open a terminal and run:\nboot2docker stop\nboot2docker download\nboot2docker start\n\n\n\nRunning Docker\n\nNote: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.\n\nBoot2Docker will log you in automatically so you can start using Docker right away.\nLet's try the hello-world example image. Run\n$ docker run hello-world\n\nThis should download the very small hello-world image and print a Hello from Docker. 
message.\nLogin with PUTTY instead of using the CMD\nBoot2Docker generates and uses the public/private key pair in your %HOMEPATH%\\.ssh\ndirectory so to log in you need to use the private key from this same directory.\nThe private key needs to be converted into the format PuTTY uses.\nYou can do this with\nputtygen:\n\nOpen puttygen.exe and load (\"File\"-\"Load\" menu) the private key from\n %HOMEPATH%\\.ssh\\id_boot2docker\nthen click: \"Save Private Key\".\nThen use the saved file to log in with PuTTY using docker@127.0.0.1:2022.\n\nFurther Details\nThe Boot2Docker management tool provides several commands:\n$ ./boot2docker\nUsage: ./boot2docker [options] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|delete|download|version} [args]\n\nContainer port redirection\nIf you are curious, the username for the boot2docker default user is docker and the password is tcuser.\nThe latest version of boot2docker sets up a host-only network adapter which provides access to the container's ports.\nIf you run a container with an exposed port:\ndocker run --rm -i -t -p 80:80 nginx\n\nThen you should be able to access that nginx server using the IP address reported\nto you using:\nboot2docker ip\n\nTypically, it is 192.168.59.103, but it could get changed by VirtualBox's DHCP\nimplementation.\nFor further information or to report issues, please see the Boot2Docker site.",
|
|
"title": "Microsoft Windows"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#windows",
|
|
"tags": "",
|
|
"text": "Note: \nDocker has been tested on Windows 7.1 and 8; it may also run on older versions.\nYour processor needs to support hardware virtualization. The Docker Engine uses Linux-specific kernel features, so to run it on Windows\nwe need to use a lightweight virtual machine (vm). You use the Windows Docker client to\ncontrol the virtualized Docker Engine to build, run, and manage Docker containers. To make this process easier, we've designed a helper application called Boot2Docker that installs the\nvirtual machine and runs the Docker daemon.",
|
|
"title": "Windows"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#demonstration",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Demonstration"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#installation",
|
|
"tags": "",
|
|
"text": "Download the latest release of the Docker for Windows Installer Run the installer, which will install VirtualBox, MSYS-git, the boot2docker Linux ISO,\nand the Boot2Docker management tool.\n Run the Boot2Docker Start shell script from your Desktop or Program Files Boot2Docker for Windows.\n The Start script will ask you to enter an ssh key passphrase - the simplest\n (but least secure) is to just hit [Enter]. The Boot2Docker Start script will connect you to a shell session in the virtual\n machine. If needed, it will initialize a new VM and start it.",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#upgrading",
|
|
"tags": "",
|
|
"text": "Download the latest release of the Docker for Windows Installer Run the installer, which will update the Boot2Docker management tool. To upgrade your existing virtual machine, open a terminal and run: boot2docker stop\nboot2docker download\nboot2docker start",
|
|
"title": "Upgrading"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#running-docker",
|
|
"tags": "",
|
|
"text": "Note: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples. Boot2Docker will log you in automatically so you can start using Docker right away. Let's try the hello-world example image. Run $ docker run hello-world This should download the very small hello-world image and print a Hello from Docker. message.",
|
|
"title": "Running Docker"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#login-with-putty-instead-of-using-the-cmd",
|
|
"tags": "",
|
|
"text": "Boot2Docker generates and uses the public/private key pair in your %HOMEPATH%\\.ssh \ndirectory so to log in you need to use the private key from this same directory. The private key needs to be converted into the format PuTTY uses. You can do this with puttygen : Open puttygen.exe and load (\"File\"- \"Load\" menu) the private key from\n %HOMEPATH%\\.ssh\\id_boot2docker then click: \"Save Private Key\". Then use the saved file to login with PuTTY using docker@127.0.0.1:2022 .",
|
|
"title": "Login with PUTTY instead of using the CMD"
|
|
},
|
|
{
|
|
"loc": "/installation/windows#further-details",
|
|
"tags": "",
|
|
"text": "The Boot2Docker management tool provides several commands: $ ./boot2docker\nUsage: ./boot2docker [ options ] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|delete|download|version} [ args ]",
"title": "Further Details"
},
{
"loc": "/installation/windows#container-port-redirection",
"tags": "",
"text": "If you are curious, the username for the boot2docker default user is docker and the password is tcuser . The latest version of boot2docker sets up a host only network adaptor which provides access to the container's ports. If you run a container with an exposed port: docker run --rm -i -t -p 80:80 nginx Then you should be able to access that nginx server using the IP address reported\nto you using: boot2docker ip Typically, it is 192.168.59.103, but it could get changed by Virtualbox's DHCP\nimplementation. For further information or to report issues, please see the Boot2Docker site",
"title": "Container port redirection"
},
{
"loc": "/installation/amazon/",
"tags": "",
"text": "Amazon EC2\nThere are several ways to install Docker on AWS EC2. You can use Amazon Linux, which includes the Docker packages in its Software Repository, or opt for any of the other supported Linux images, for example a Standard Ubuntu Installation.\nYou'll need an AWS account first, of\ncourse.\nAmazon QuickStart with Amazon Linux AMI 2014.09.1\nThe latest Amazon Linux AMI, 2014.09.1, is Docker ready. Docker packages can be installed from Amazon's provided Software\nRepository.\n\nChoose an image:\nLaunch the Create Instance\n Wizard\n menu on your AWS Console.\nIn the Quick Start menu, select the Amazon provided AMI for Amazon Linux 2014.09.1\nFor testing you can use the default (possibly free)\n t2.micro instance (more info on\n pricing).\nClick the Next: Configure Instance Details\n button at the bottom right.\nAfter a few more standard choices where defaults are probably ok,\n your Amazon Linux instance should be running!\nSSH to your instance to install Docker :\n ssh -i path to your private key ec2-user@your public IP address\nOnce connected to the instance, type\n sudo yum install -y docker ; sudo service docker start\n to install and start Docker\n\nIf this is your first AWS instance, you may need to set up your Security Group to allow SSH. By default all incoming ports to your new instance will be blocked by the AWS Security Group, so you might just get timeouts when you try to connect.\nOnce you`ve got Docker installed, you're ready to try it out \u2013 head on\nover to the User Guide.\nStandard Ubuntu Installation\nIf you want a more hands-on installation, then you can follow the\nUbuntu instructions installing Docker\non any EC2 instance running Ubuntu. Just follow Step 1 from the Amazon\nQuickStart above to pick an image (or use one of your\nown) and skip the step with the User Data. Then continue with the\nUbuntu instructions.\nContinue with the User Guide.",
"title": "Amazon EC2"
},
{
"loc": "/installation/amazon#amazon-ec2",
"tags": "",
"text": "There are several ways to install Docker on AWS EC2. You can use Amazon Linux, which includes the Docker packages in its Software Repository, or opt for any of the other supported Linux images, for example a Standard Ubuntu Installation . You'll need an AWS account first, of\ncourse.",
"title": "Amazon EC2"
},
{
"loc": "/installation/amazon#amazon-quickstart-with-amazon-linux-ami-2014091",
"tags": "",
"text": "The latest Amazon Linux AMI, 2014.09.1, is Docker ready. Docker packages can be installed from Amazon's provided Software\nRepository. Choose an image: Launch the Create Instance\n Wizard \n menu on your AWS Console. In the Quick Start menu, select the Amazon provided AMI for Amazon Linux 2014.09.1 For testing you can use the default (possibly free)\n t2.micro instance (more info on\n pricing ). Click the Next: Configure Instance Details \n button at the bottom right. After a few more standard choices where defaults are probably ok,\n your Amazon Linux instance should be running! SSH to your instance to install Docker :\n ssh -i path to your private key ec2-user@ your public IP address Once connected to the instance, type\n sudo yum install -y docker ; sudo service docker start \n to install and start Docker If this is your first AWS instance, you may need to set up your Security Group to allow SSH. By default all incoming ports to your new instance will be blocked by the AWS Security Group, so you might just get timeouts when you try to connect. Once you`ve got Docker installed, you're ready to try it out \u2013 head on\nover to the User Guide .",
"title": "Amazon QuickStart with Amazon Linux AMI 2014.09.1"
},
{
"loc": "/installation/amazon#standard-ubuntu-installation",
"tags": "",
"text": "If you want a more hands-on installation, then you can follow the Ubuntu instructions installing Docker\non any EC2 instance running Ubuntu. Just follow Step 1 from the Amazon\nQuickStart above to pick an image (or use one of your\nown) and skip the step with the User Data . Then continue with the Ubuntu instructions. Continue with the User Guide .",
"title": "Standard Ubuntu Installation"
},
{
"loc": "/installation/archlinux/",
"tags": "",
"text": "Arch Linux\nInstalling on Arch Linux can be handled via the package in community:\n\ndocker\n\nor the following AUR package:\n\ndocker-git\n\nThe docker package will install the latest tagged version of docker. The\ndocker-git package will build from the current master branch.\nDependencies\nDocker depends on several packages which are specified as dependencies\nin the packages. The core dependencies are:\n\nbridge-utils\ndevice-mapper\niproute2\nlxc\nsqlite\n\nInstallation\nFor the normal package a simple\npacman -S docker\n\nis all that is needed.\nFor the AUR package execute:\nyaourt -S docker-git\n\nThe instructions here assume yaourt is installed. See Arch User\nRepository\nfor information on building and installing packages from the AUR if you\nhave not done so before.\nStarting Docker\nThere is a systemd service unit created for docker. To start the docker\nservice:\n$ sudo systemctl start docker\n\nTo start on system boot:\n$ sudo systemctl enable docker\n\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options.",
"title": "Arch Linux"
},
{
"loc": "/installation/archlinux#arch-linux",
"tags": "",
"text": "Installing on Arch Linux can be handled via the package in community: docker or the following AUR package: docker-git The docker package will install the latest tagged version of docker. The\ndocker-git package will build from the current master branch.",
"title": "Arch Linux"
},
{
"loc": "/installation/archlinux#dependencies",
"tags": "",
"text": "Docker depends on several packages which are specified as dependencies\nin the packages. The core dependencies are: bridge-utils device-mapper iproute2 lxc sqlite",
"title": "Dependencies"
},
{
"loc": "/installation/archlinux#installation",
"tags": "",
"text": "For the normal package a simple pacman -S docker is all that is needed. For the AUR package execute: yaourt -S docker-git The instructions here assume yaourt is installed. See Arch User\nRepository \nfor information on building and installing packages from the AUR if you\nhave not done so before.",
"title": "Installation"
},
{
"loc": "/installation/archlinux#starting-docker",
"tags": "",
"text": "There is a systemd service unit created for docker. To start the docker\nservice: $ sudo systemctl start docker To start on system boot: $ sudo systemctl enable docker",
"title": "Starting Docker"
},
{
"loc": "/installation/archlinux#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/binaries/",
"tags": "",
"text": "Binaries\nThis instruction set is meant for hackers who want to try out Docker\non a variety of environments.\nBefore following these directions, you should really check if a packaged\nversion of Docker is already available for your distribution. We have\npackages for many distributions, and more keep showing up all the time!\nCheck runtime dependencies\nTo run properly, docker needs the following software to be installed at\nruntime:\n\niptables version 1.4 or later\nGit version 1.7 or later\nprocps (or similar provider of a \"ps\" executable)\nXZ Utils 4.9 or later\na properly mounted\n cgroupfs hierarchy (having a single, all-encompassing \"cgroup\" mount\n point is\n not\n sufficient)\n\nCheck kernel dependencies\nDocker in daemon mode has specific kernel requirements. For details,\ncheck your distribution in Installation.\nA 3.10 Linux kernel is the minimum requirement for Docker.\nKernels older than 3.10 lack some of the features required to run Docker\ncontainers. These older versions are known to have bugs which cause data loss\nand frequently panic under certain conditions.\nThe latest minor version (3.x.y) of the 3.10 (or a newer maintained version)\nLinux kernel is recommended. Keeping the kernel up to date with the latest\nminor version will ensure critical kernel bugs get fixed.\n\nWarning:\nInstalling custom kernels and kernel packages is probably not\nsupported by your Linux distribution's vendor. Please make sure to\nask your vendor about Docker support first before attempting to\ninstall custom kernels on your distribution.\nWarning:\nInstalling a newer kernel might not be enough for some distributions\nwhich provide packages which are too old or incompatible with\nnewer kernels.\n\nNote that Docker also has a client mode, which can run on virtually any\nLinux kernel (it even builds on OS X!).\nEnable AppArmor and SELinux when possible\nPlease use AppArmor or SELinux if your Linux distribution supports\neither of the two. 
This helps improve security and blocks certain\ntypes of exploits. Your distribution's documentation should provide\ndetailed steps on how to enable the recommended security mechanism.\nSome Linux distributions enable AppArmor or SELinux by default and\nthey run a kernel which doesn't meet the minimum requirements (3.10\nor newer). Updating the kernel to 3.10 or newer on such a system\nmight not be enough to start Docker and run containers.\nIncompatibilities between the version of AppArmor/SELinux user\nspace utilities provided by the system and the kernel could prevent\nDocker from running, from starting containers or, cause containers to\nexhibit unexpected behaviour.\n\nWarning:\nIf either of the security mechanisms is enabled, it should not be\ndisabled to make Docker or its containers run. This will reduce\nsecurity in that environment, lose support from the distribution's\nvendor for the system, and might break regulations and security\npolicies in heavily regulated environments.\n\nGet the docker binary:\n$ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker\n$ chmod +x docker\n\n\nNote:\nIf you have trouble downloading the binary, you can also get the smaller\ncompressed release file:\nhttps://get.docker.com/builds/Linux/x86_64/docker-latest.tgz\n\nRun the docker daemon\n# start the docker in daemon mode from the directory you unpacked\n$ sudo ./docker -d \n\nGiving non-root access\nThe docker daemon always runs as the root user, and the docker\ndaemon binds to a Unix socket instead of a TCP port. By default that\nUnix socket is owned by the user root, and so, by default, you can\naccess it with sudo.\nIf you (or your Docker installer) create a Unix group called docker\nand add users to it, then the docker daemon will make the ownership of\nthe Unix socket read/writable by the docker group when the daemon\nstarts. 
The docker daemon must always run as the root user, but if you\nrun the docker client as a user in the docker group then you don't\nneed to add sudo to all the client commands.\n\nWarning: \nThe docker group (or the group specified with -G) is root-equivalent;\nsee Docker Daemon Attack Surface details.\n\nUpgrades\nTo upgrade your manual installation of Docker, first kill the docker\ndaemon:\n$ killall docker\n\nThen follow the regular installation steps.\nRun your first container!\n# check your docker version\n$ sudo ./docker version\n\n# run a container and open an interactive shell in the container\n$ sudo ./docker run -i -t ubuntu /bin/bash\n\nContinue with the User Guide.",
"title": "Binaries"
},
{
"loc": "/installation/binaries#binaries",
"tags": "",
"text": "This instruction set is meant for hackers who want to try out Docker\non a variety of environments. Before following these directions, you should really check if a packaged\nversion of Docker is already available for your distribution. We have\npackages for many distributions, and more keep showing up all the time!",
"title": "Binaries"
},
{
"loc": "/installation/binaries#check-runtime-dependencies",
"tags": "",
"text": "To run properly, docker needs the following software to be installed at\nruntime: iptables version 1.4 or later Git version 1.7 or later procps (or similar provider of a \"ps\" executable) XZ Utils 4.9 or later a properly mounted \n cgroupfs hierarchy (having a single, all-encompassing \"cgroup\" mount\n point is \n not \n sufficient )",
"title": "Check runtime dependencies"
},
{
"loc": "/installation/binaries#check-kernel-dependencies",
"tags": "",
"text": "Docker in daemon mode has specific kernel requirements. For details,\ncheck your distribution in Installation . A 3.10 Linux kernel is the minimum requirement for Docker.\nKernels older than 3.10 lack some of the features required to run Docker\ncontainers. These older versions are known to have bugs which cause data loss\nand frequently panic under certain conditions. The latest minor version (3.x.y) of the 3.10 (or a newer maintained version)\nLinux kernel is recommended. Keeping the kernel up to date with the latest\nminor version will ensure critical kernel bugs get fixed. Warning :\nInstalling custom kernels and kernel packages is probably not\nsupported by your Linux distribution's vendor. Please make sure to\nask your vendor about Docker support first before attempting to\ninstall custom kernels on your distribution. Warning :\nInstalling a newer kernel might not be enough for some distributions\nwhich provide packages which are too old or incompatible with\nnewer kernels. Note that Docker also has a client mode, which can run on virtually any\nLinux kernel (it even builds on OS X!).",
"title": "Check kernel dependencies"
},
{
"loc": "/installation/binaries#enable-apparmor-and-selinux-when-possible",
"tags": "",
"text": "Please use AppArmor or SELinux if your Linux distribution supports\neither of the two. This helps improve security and blocks certain\ntypes of exploits. Your distribution's documentation should provide\ndetailed steps on how to enable the recommended security mechanism. Some Linux distributions enable AppArmor or SELinux by default and\nthey run a kernel which doesn't meet the minimum requirements (3.10\nor newer). Updating the kernel to 3.10 or newer on such a system\nmight not be enough to start Docker and run containers.\nIncompatibilities between the version of AppArmor/SELinux user\nspace utilities provided by the system and the kernel could prevent\nDocker from running, from starting containers or, cause containers to\nexhibit unexpected behaviour. Warning :\nIf either of the security mechanisms is enabled, it should not be\ndisabled to make Docker or its containers run. This will reduce\nsecurity in that environment, lose support from the distribution's\nvendor for the system, and might break regulations and security\npolicies in heavily regulated environments.",
"title": "Enable AppArmor and SELinux when possible"
},
{
"loc": "/installation/binaries#get-the-docker-binary",
"tags": "",
"text": "$ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker\n$ chmod +x docker Note :\nIf you have trouble downloading the binary, you can also get the smaller\ncompressed release file: https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz",
"title": "Get the docker binary:"
},
{
"loc": "/installation/binaries#run-the-docker-daemon",
"tags": "",
"text": "# start the docker in daemon mode from the directory you unpacked\n$ sudo ./docker -d",
"title": "Run the docker daemon"
},
{
"loc": "/installation/binaries#giving-non-root-access",
"tags": "",
"text": "The docker daemon always runs as the root user, and the docker \ndaemon binds to a Unix socket instead of a TCP port. By default that\nUnix socket is owned by the user root , and so, by default, you can\naccess it with sudo . If you (or your Docker installer) create a Unix group called docker \nand add users to it, then the docker daemon will make the ownership of\nthe Unix socket read/writable by the docker group when the daemon\nstarts. The docker daemon must always run as the root user, but if you\nrun the docker client as a user in the docker group then you don't\nneed to add sudo to all the client commands. Warning : \nThe docker group (or the group specified with -G ) is root-equivalent;\nsee Docker Daemon Attack Surface details.",
"title": "Giving non-root access"
},
{
"loc": "/installation/binaries#upgrades",
"tags": "",
"text": "To upgrade your manual installation of Docker, first kill the docker\ndaemon: $ killall docker Then follow the regular installation steps.",
"title": "Upgrades"
},
{
"loc": "/installation/binaries#run-your-first-container",
"tags": "",
"text": "# check your docker version\n$ sudo ./docker version\n\n# run a container and open an interactive shell in the container\n$ sudo ./docker run -i -t ubuntu /bin/bash Continue with the User Guide .",
"title": "Run your first container!"
},
{
"loc": "/installation/centos/",
"tags": "",
"text": "CentOS\nDocker is supported on the following versions of CentOS:\n\nCentOS 7 (64-bit)\nCentOS 6.5 (64-bit) or later\n\nThese instructions are likely work for other binary compatible EL6/EL7 distributions\nsuch as Scientific Linux, but they haven't been tested.\nPlease note that due to the current Docker limitations, Docker is able to\nrun only on the 64 bit architecture.\nKernel support\nCurrently the CentOS project will only support Docker when running on kernels\nshipped by the distribution. There are kernel changes which will cause issues\nif one decides to step outside that box and run non-distribution kernel packages.\nTo run Docker on CentOS-6.5 or later, you will need\nkernel version 2.6.32-431 or higher as this has specific kernel fixes to allow\nDocker to run.\nInstalling Docker - CentOS-7\nDocker is included by default in the CentOS-Extras repository. To install\nrun the following command:\n$ sudo yum install docker\n\nPlease continue with the Starting the Docker daemon.\nFirewallD\nCentOS-7 introduced firewalld, which is a wrapper around iptables and can\nconflict with Docker.\nWhen firewalld is started or restarted it will remove the DOCKER chain\nfrom iptables, preventing Docker from working properly.\nWhen using Systemd, firewalld is started before Docker, but if you\nstart or restart firewalld after Docker, you will have to restart the Docker daemon.\nInstalling Docker - CentOS-6.5\nFor Centos-6.5, the Docker package is part of Extra Packages\nfor Enterprise Linux (EPEL) repository,\na community effort to create and maintain additional packages for the RHEL distribution.\nFirstly, you need to ensure you have the EPEL repository enabled. 
Please\nfollow the EPEL installation instructions.\nFor CentOS-6, there is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io.\nTo proceed with docker-io installation on CentOS-6, you may need to remove the\ndocker package first.\n$ sudo yum -y remove docker\n\nNext, let's install the docker-io package which will install Docker on our host.\n$ sudo yum install docker-io\n\nPlease continue with the Starting the Docker daemon.\nManual installation of latest Docker release\nWhile using a package is the recommended way of installing Docker,\nthe above package might not be the current release version. If you need the latest\nversion, you can install the binary directly.\nWhen installing the binary without a package, you may want\nto integrate Docker with Systemd. For this, install the two unit files\n(service and socket) from the GitHub\nrepository\nto /etc/systemd/system.\nPlease continue with the Starting the Docker daemon.\nStarting the Docker daemon\nOnce Docker is installed, you will need to start the docker daemon.\n$ sudo service docker start\n\nIf we want Docker to start at boot, we should also:\n$ sudo chkconfig docker on\n\nNow let's verify that Docker is working. First we'll need to get the latest\ncentos image.\n$ sudo docker pull centos\n\nNext we'll make sure that we can see the image by running:\n$ sudo docker images centos\n\nThis should generate some output similar to:\n$ sudo docker images centos\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ncentos latest 0b443ba03958 2 hours ago 297.6 MB\n\nRun a simple bash shell to test the image:\n$ sudo docker run -i -t centos /bin/bash\n\nIf everything is working properly, you'll get a simple bash prompt. 
Type\nexit to continue.\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options.\nDockerfiles\nThe CentOS Project provides a number of sample Dockerfiles which you may use\neither as templates or to familiarize yourself with docker. These templates\nare available on GitHub at https://github.com/CentOS/CentOS-Dockerfiles\nDone! You can either continue with the Docker User\nGuide or explore and build on the images yourself.\nIssues?\nIf you have any issues - please report them directly in the\nCentOS bug tracker.",
"title": "CentOS"
},
{
"loc": "/installation/centos#centos",
"tags": "",
"text": "Docker is supported on the following versions of CentOS: CentOS 7 (64-bit) CentOS 6.5 (64-bit) or later These instructions are likely work for other binary compatible EL6/EL7 distributions\nsuch as Scientific Linux, but they haven't been tested. Please note that due to the current Docker limitations, Docker is able to\nrun only on the 64 bit architecture.",
"title": "CentOS"
},
{
"loc": "/installation/centos#kernel-support",
"tags": "",
"text": "Currently the CentOS project will only support Docker when running on kernels\nshipped by the distribution. There are kernel changes which will cause issues\nif one decides to step outside that box and run non-distribution kernel packages. To run Docker on CentOS-6.5 or later, you will need\nkernel version 2.6.32-431 or higher as this has specific kernel fixes to allow\nDocker to run.",
"title": "Kernel support"
},
{
"loc": "/installation/centos#installing-docker-centos-7",
"tags": "",
"text": "Docker is included by default in the CentOS-Extras repository. To install\nrun the following command: $ sudo yum install docker Please continue with the Starting the Docker daemon . FirewallD CentOS-7 introduced firewalld, which is a wrapper around iptables and can\nconflict with Docker. When firewalld is started or restarted it will remove the DOCKER chain\nfrom iptables, preventing Docker from working properly. When using Systemd, firewalld is started before Docker, but if you\nstart or restart firewalld after Docker, you will have to restart the Docker daemon.",
"title": "Installing Docker - CentOS-7"
},
{
"loc": "/installation/centos#installing-docker-centos-65",
"tags": "",
"text": "For Centos-6.5, the Docker package is part of Extra Packages\nfor Enterprise Linux (EPEL) repository,\na community effort to create and maintain additional packages for the RHEL distribution. Firstly, you need to ensure you have the EPEL repository enabled. Please\nfollow the EPEL installation instructions . For CentOS-6, there is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io . To proceed with docker-io installation on CentOS-6, you may need to remove the docker package first. $ sudo yum -y remove docker Next, let's install the docker-io package which will install Docker on our host. $ sudo yum install docker-io Please continue with the Starting the Docker daemon .",
"title": "Installing Docker - CentOS-6.5"
},
{
"loc": "/installation/centos#manual-installation-of-latest-docker-release",
"tags": "",
"text": "While using a package is the recommended way of installing Docker,\nthe above package might not be the current release version. If you need the latest\nversion, you can install the binary directly . When installing the binary without a package, you may want\nto integrate Docker with Systemd. For this, install the two unit files\n(service and socket) from the GitHub\nrepository \nto /etc/systemd/system . Please continue with the Starting the Docker daemon .",
"title": "Manual installation of latest Docker release"
},
{
"loc": "/installation/centos#starting-the-docker-daemon",
"tags": "",
"text": "Once Docker is installed, you will need to start the docker daemon. $ sudo service docker start If we want Docker to start at boot, we should also: $ sudo chkconfig docker on Now let's verify that Docker is working. First we'll need to get the latest centos image. $ sudo docker pull centos Next we'll make sure that we can see the image by running: $ sudo docker images centos This should generate some output similar to: $ sudo docker images centos\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ncentos latest 0b443ba03958 2 hours ago 297.6 MB Run a simple bash shell to test the image: $ sudo docker run -i -t centos /bin/bash If everything is working properly, you'll get a simple bash prompt. Type exit to continue.",
"title": "Starting the Docker daemon"
},
{
"loc": "/installation/centos#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/centos#dockerfiles",
"tags": "",
"text": "The CentOS Project provides a number of sample Dockerfiles which you may use\neither as templates or to familiarize yourself with docker. These templates\nare available on GitHub at https://github.com/CentOS/CentOS-Dockerfiles Done! You can either continue with the Docker User\nGuide or explore and build on the images yourself.",
"title": "Dockerfiles"
},
{
"loc": "/installation/centos#issues",
"tags": "",
"text": "If you have any issues - please report them directly in the CentOS bug tracker .",
"title": "Issues?"
},
{
"loc": "/installation/cruxlinux/",
"tags": "",
"text": "CRUX Linux\nInstalling on CRUX Linux can be handled via the contrib ports from\nJames Mills and are included in the\nofficial contrib ports:\n\ndocker\n\nThe docker port will build and install the latest tagged version of Docker.\nInstallation\nAssuming you have contrib enabled, update your ports tree and install docker (as root):\n# prt-get depinst docker\n\nKernel Requirements\nTo have a working CRUX+Docker Host you must ensure your Kernel has\nthe necessary modules enabled for the Docker Daemon to function correctly.\nPlease read the README:\n$ prt-get readme docker\n\nThe docker port installs the contrib/check-config.sh script\nprovided by the Docker contributors for checking your kernel\nconfiguration as a suitable Docker host.\nTo check your Kernel configuration run:\n$ /usr/share/docker/check-config.sh\n\nStarting Docker\nThere is a rc script created for Docker. To start the Docker service (as root):\n# /etc/rc.d/docker start\n\nTo start on system boot:\n\nEdit /etc/rc.conf\nPut docker into the SERVICES=(...) array after net.\n\nImages\nThere is a CRUX image maintained by James Mills\nas part of the Docker \"Official Library\" of images. To use this image simply pull it\nor use it as part of your FROM line in your Dockerfile(s).\n$ docker pull crux\n$ docker run -i -t crux\n\nThere are also user contributed CRUX based image(s) on the Docker Hub.\nIssues\nIf you have any issues please file a bug with the\nCRUX Bug Tracker.\nSupport\nFor support contact the CRUX Mailing List\nor join CRUX's IRC Channels. on the\nFreeNode IRC Network.",
"title": "CRUX Linux"
},
{
"loc": "/installation/cruxlinux#crux-linux",
"tags": "",
"text": "Installing on CRUX Linux can be handled via the contrib ports from James Mills and are included in the\nofficial contrib ports: docker The docker port will build and install the latest tagged version of Docker.",
"title": "CRUX Linux"
},
{
"loc": "/installation/cruxlinux#installation",
"tags": "",
"text": "Assuming you have contrib enabled, update your ports tree and install docker ( as root ): # prt-get depinst docker",
"title": "Installation"
},
{
"loc": "/installation/cruxlinux#kernel-requirements",
"tags": "",
"text": "To have a working CRUX+Docker Host you must ensure your Kernel has\nthe necessary modules enabled for the Docker Daemon to function correctly. Please read the README : $ prt-get readme docker The docker port installs the contrib/check-config.sh script\nprovided by the Docker contributors for checking your kernel\nconfiguration as a suitable Docker host. To check your Kernel configuration run: $ /usr/share/docker/check-config.sh",
"title": "Kernel Requirements"
},
{
"loc": "/installation/cruxlinux#starting-docker",
"tags": "",
"text": "There is a rc script created for Docker. To start the Docker service ( as root ): # /etc/rc.d/docker start To start on system boot: Edit /etc/rc.conf Put docker into the SERVICES=(...) array after net .",
"title": "Starting Docker"
},
{
"loc": "/installation/cruxlinux#images",
"tags": "",
"text": "There is a CRUX image maintained by James Mills \nas part of the Docker \"Official Library\" of images. To use this image simply pull it\nor use it as part of your FROM line in your Dockerfile(s) . $ docker pull crux\n$ docker run -i -t crux There are also user contributed CRUX based image(s) on the Docker Hub.",
"title": "Images"
},
{
"loc": "/installation/cruxlinux#issues",
"tags": "",
"text": "If you have any issues please file a bug with the CRUX Bug Tracker .",
"title": "Issues"
},
{
"loc": "/installation/cruxlinux#support",
"tags": "",
"text": "For support contact the CRUX Mailing List \nor join CRUX's IRC Channels . on the FreeNode IRC Network.",
"title": "Support"
},
{
"loc": "/installation/debian/",
"tags": "",
"text": "Debian\nDocker is supported on the following versions of Debian:\n\nDebian 8.0 Jessie (64-bit)\nDebian 7.7 Wheezy (64-bit)\n\nDebian Jessie 8.0 (64-bit)\nDebian 8 comes with a 3.14.0 Linux kernel, and a docker.io package which\ninstalls all its prerequisites from Debian's repository.\n\nNote:\nDebian contains a much older KDE3/GNOME2 package called docker, so the\npackage and the executable are called docker.io.\n\nInstallation\nTo install the latest Debian package (may not be the latest Docker release):\n$ sudo apt-get update\n$ sudo apt-get install docker.io\n\nTo verify that everything has worked as expected:\n$ sudo docker run -i -t ubuntu /bin/bash\n\nWhich should download the ubuntu image, and then start bash in a container.\n\nNote: \nIf you want to enable memory and swap accounting see\nthis.\n\nDebian Wheezy/Stable 7.x (64-bit)\nDocker requires Kernel 3.8+, while Wheezy ships with Kernel 3.2 (for more details\non why 3.8 is required, see discussion on\nbug #407).\nFortunately, wheezy-backports currently has Kernel 3.16\n,\nwhich is officially supported by Docker.\nInstallation\n\n\nInstall Kernel from wheezy-backports\nAdd the following line to your /etc/apt/sources.list\ndeb http://http.debian.net/debian wheezy-backports main\nthen install the linux-image-amd64 package (note the use of\n-t wheezy-backports)\n$ sudo apt-get update\n$ sudo apt-get install -t wheezy-backports linux-image-amd64\n\n\n\nInstall Docker using the get.docker.com script:\ncurl -sSL https://get.docker.com/ | sh\n\n\nGiving non-root access\nThe docker daemon always runs as the root user and the docker\ndaemon binds to a Unix socket instead of a TCP port. By default that\nUnix socket is owned by the user root, and so, by default, you can\naccess it with sudo.\nIf you (or your Docker installer) create a Unix group called docker\nand add users to it, then the docker daemon will make the ownership of\nthe Unix socket read/writable by the docker group when the daemon\nstarts. 
The docker daemon must always run as the root user, but if you\nrun the docker client as a user in the docker group then you don't\nneed to add sudo to all the client commands. From Docker 0.9.0 you can\nuse the -G flag to specify an alternative group.\n\nWarning: \nThe docker group (or the group specified with the -G flag) is\nroot-equivalent; see Docker Daemon Attack Surface details.\n\nExample:\n# Add the docker group if it doesn't already exist.\n$ sudo groupadd docker\n\n# Add the connected user \"${USER}\" to the docker group.\n# Change the user name to match your preferred user.\n# You may have to logout and log back in again for\n# this to take effect.\n$ sudo gpasswd -a ${USER} docker\n\n# Restart the Docker daemon.\n$ sudo service docker restart\n\nWhat next?\nContinue with the User Guide.",
"title": "Debian"
},
{
"loc": "/installation/debian#debian",
"tags": "",
"text": "Docker is supported on the following versions of Debian: Debian 8.0 Jessie (64-bit) Debian 7.7 Wheezy (64-bit)",
"title": "Debian"
},
{
"loc": "/installation/debian#debian-jessie-80-64-bit",
"tags": "",
"text": "Debian 8 comes with a 3.14.0 Linux kernel, and a docker.io package which\ninstalls all its prerequisites from Debian's repository. Note :\nDebian contains a much older KDE3/GNOME2 package called docker , so the\npackage and the executable are called docker.io . Installation To install the latest Debian package (may not be the latest Docker release): $ sudo apt-get update\n$ sudo apt-get install docker.io To verify that everything has worked as expected: $ sudo docker run -i -t ubuntu /bin/bash Which should download the ubuntu image, and then start bash in a container. Note : \nIf you want to enable memory and swap accounting see this .",
"title": "Debian Jessie 8.0 (64-bit)"
},
{
"loc": "/installation/debian#debian-wheezystable-7x-64-bit",
"tags": "",
"text": "Docker requires Kernel 3.8+, while Wheezy ships with Kernel 3.2 (for more details\non why 3.8 is required, see discussion on bug #407 ). Fortunately, wheezy-backports currently has Kernel 3.16 ,\nwhich is officially supported by Docker. Installation Install Kernel from wheezy-backports Add the following line to your /etc/apt/sources.list deb http://http.debian.net/debian wheezy-backports main then install the linux-image-amd64 package (note the use of -t wheezy-backports ) $ sudo apt-get update\n$ sudo apt-get install -t wheezy-backports linux-image-amd64 Install Docker using the get.docker.com script: curl -sSL https://get.docker.com/ | sh",
"title": "Debian Wheezy/Stable 7.x (64-bit)"
},
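The Wheezy section above hinges on a kernel-version floor: Docker needs 3.8+, Wheezy ships 3.2, and wheezy-backports provides 3.16. That check can be sketched mechanically; the following is an illustrative POSIX shell snippet, not part of the official instructions, and `kernel_at_least` is a hypothetical helper name:

```shell
#!/bin/sh
# kernel_at_least VERSION MAJOR MINOR -- succeeds when VERSION (as printed
# by `uname -r`, e.g. "3.16.0-4-amd64") is at least MAJOR.MINOR.
# Hypothetical helper for illustration; not shipped by Docker or Debian.
kernel_at_least() {
    version=$1; want_major=$2; want_minor=$3
    major=${version%%.*}          # text before the first dot
    rest=${version#*.}            # text after the first dot
    minor=${rest%%.*}             # text before the second dot
    minor=${minor%%[!0-9]*}       # strip suffixes such as "-amd64"
    [ "$major" -gt "$want_major" ] ||
        { [ "$major" -eq "$want_major" ] && [ "$minor" -ge "$want_minor" ]; }
}

if kernel_at_least "$(uname -r)" 3 8; then
    echo "kernel OK for Docker"
else
    echo "kernel too old: install linux-image-amd64 from wheezy-backports"
fi
```

On a stock Wheezy 3.2 kernel this prints the backports hint; after booting the backports 3.16 kernel it reports the kernel as OK.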
{
"loc": "/installation/debian#giving-non-root-access",
"tags": "",
"text": "The docker daemon always runs as the root user and the docker \ndaemon binds to a Unix socket instead of a TCP port. By default that\nUnix socket is owned by the user root , and so, by default, you can\naccess it with sudo . If you (or your Docker installer) create a Unix group called docker \nand add users to it, then the docker daemon will make the ownership of\nthe Unix socket read/writable by the docker group when the daemon\nstarts. The docker daemon must always run as the root user, but if you\nrun the docker client as a user in the docker group then you don't\nneed to add sudo to all the client commands. From Docker 0.9.0 you can\nuse the -G flag to specify an alternative group. Warning : \nThe docker group (or the group specified with the -G flag) is root -equivalent; see Docker Daemon Attack Surface details. Example: # Add the docker group if it doesn't already exist.\n$ sudo groupadd docker\n\n# Add the connected user \"${USER}\" to the docker group.\n# Change the user name to match your preferred user.\n# You may have to logout and log back in again for\n# this to take effect.\n$ sudo gpasswd -a ${USER} docker\n\n# Restart the Docker daemon.\n$ sudo service docker restart",
"title": "Giving non-root access"
},
{
"loc": "/installation/debian#what-next",
"tags": "",
"text": "Continue with the User Guide .",
"title": "What next?"
},
{
"loc": "/installation/fedora/",
"tags": "",
"text": "Fedora\nDocker is supported on the following versions of Fedora:\n\nFedora 20 (64-bit)\nFedora 21 and later (64-bit)\n\nCurrently the Fedora project will only support Docker when running on kernels\nshipped by the distribution. There are kernel changes which will cause issues\nif one decides to step outside that box and run non-distribution kernel packages.\nFedora 21 and later installation\nInstall the docker package which will install Docker on our host.\n$ sudo yum -y install docker\n\nTo update the docker package:\n$ sudo yum -y update docker\n\nPlease continue with the Starting the Docker daemon.\nFedora 20 installation\nFor Fedora 20, there is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io.\nTo proceed with docker-io installation on Fedora 20, please remove the docker\npackage first.\n$ sudo yum -y remove docker\n$ sudo yum -y install docker-io\n\nTo update the docker package:\n$ sudo yum -y update docker-io\n\nPlease continue with the Starting the Docker daemon.\nStarting the Docker daemon\nNow that it's installed, let's start the Docker daemon.\n$ sudo systemctl start docker\n\nIf we want Docker to start at boot, we should also:\n$ sudo systemctl enable docker\n\nNow let's verify that Docker is working.\n$ sudo docker run -i -t fedora /bin/bash\n\n\nNote: If you get a Cannot start container error mentioning SELinux\nor permission denied, you may need to update the SELinux policies.\nThis can be done using sudo yum upgrade selinux-policy and then rebooting.\n\nGranting rights to users to use Docker\nThe docker command line tool contacts the docker daemon process via a\nsocket file /var/run/docker.sock owned by root:root. 
Though it's\nrecommended\nto use sudo for docker commands, if users wish to avoid it, an administrator can\ncreate a docker group, have it own /var/run/docker.sock, and add users to this group.\n$ sudo groupadd docker\n$ sudo chown root:docker /var/run/docker.sock\n$ sudo usermod -a -G docker $USERNAME\n\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options.\nWhat next?\nContinue with the User Guide.",
"title": "Fedora"
},
{
"loc": "/installation/fedora#fedora",
"tags": "",
"text": "Docker is supported on the following versions of Fedora: Fedora 20 (64-bit) Fedora 21 and later (64-bit) Currently the Fedora project will only support Docker when running on kernels\nshipped by the distribution. There are kernel changes which will cause issues\nif one decides to step outside that box and run non-distribution kernel packages.",
"title": "Fedora"
},
{
"loc": "/installation/fedora#fedora-21-and-later-installation",
"tags": "",
"text": "Install the docker package which will install Docker on our host. $ sudo yum -y install docker To update the docker package: $ sudo yum -y update docker Please continue with the Starting the Docker daemon .",
"title": "Fedora 21 and later installation"
},
{
"loc": "/installation/fedora#fedora-20-installation",
"tags": "",
"text": "For Fedora 20 , there is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io . To proceed with docker-io installation on Fedora 20, please remove the docker \npackage first. $ sudo yum -y remove docker\n$ sudo yum -y install docker-io To update the docker package: $ sudo yum -y update docker-io Please continue with the Starting the Docker daemon .",
"title": "Fedora 20 installation"
},
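The Fedora 20 naming split described above (docker-io on Fedora 20 because of the system-tray name conflict, plain docker from Fedora 21 on) can be captured in a few lines of shell. This is an illustrative sketch only; `docker_pkg_for_fedora` is a hypothetical helper, not something shipped by Fedora:

```shell
#!/bin/sh
# docker_pkg_for_fedora RELEASE -- print the Docker package name for a
# given Fedora release number: "docker-io" on Fedora 20 and earlier
# (name conflict with a system tray application), "docker" on 21+.
# Hypothetical helper for illustration; the official docs hard-code the name.
docker_pkg_for_fedora() {
    if [ "$1" -le 20 ]; then
        echo "docker-io"
    else
        echo "docker"
    fi
}

# On a real system one might then run, for example:
#   sudo yum -y install "$(docker_pkg_for_fedora 20)"
```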
{
"loc": "/installation/fedora#starting-the-docker-daemon",
"tags": "",
"text": "Now that it's installed, let's start the Docker daemon. $ sudo systemctl start docker If we want Docker to start at boot, we should also: $ sudo systemctl enable docker Now let's verify that Docker is working. $ sudo docker run -i -t fedora /bin/bash Note: If you get a Cannot start container error mentioning SELinux\nor permission denied, you may need to update the SELinux policies.\nThis can be done using sudo yum upgrade selinux-policy and then rebooting.",
"title": "Starting the Docker daemon"
},
{
"loc": "/installation/fedora#granting-rights-to-users-to-use-docker",
"tags": "",
"text": "The docker command line tool contacts the docker daemon process via a\nsocket file /var/run/docker.sock owned by root:root . Though it's recommended \nto use sudo for docker commands, if users wish to avoid it, an administrator can\ncreate a docker group, have it own /var/run/docker.sock , and add users to this group. $ sudo groupadd docker\n$ sudo chown root:docker /var/run/docker.sock\n$ sudo usermod -a -G docker $USERNAME",
"title": "Granting rights to users to use Docker"
},
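The group-based access scheme above boils down to one question: is the current user in the docker group? That membership test can be sketched so that it only inspects the group list it is handed, which makes it the same on Fedora, Debian, or any other host; `in_docker_group` is an illustrative name, not part of any Docker tooling:

```shell
#!/bin/sh
# in_docker_group "GROUPS" -- succeeds if the space-separated group list
# (as printed by `id -nG`) contains the group "docker".
# Hypothetical helper for illustration only.
in_docker_group() {
    for g in $1; do
        [ "$g" = "docker" ] && return 0
    done
    return 1
}

# On a real system:
#   if in_docker_group "$(id -nG)"; then
#       docker info          # socket is group-writable; no sudo needed
#   else
#       sudo docker info
#   fi
```

Note the exact-match comparison: a group named, say, dockerd would not count.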
{
"loc": "/installation/fedora#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/fedora#what-next",
"tags": "",
"text": "Continue with the User Guide .",
"title": "What next?"
},
{
"loc": "/installation/frugalware/",
"tags": "",
"text": "FrugalWare\nInstalling on FrugalWare is handled via the official packages:\n\nlxc-docker i686\nlxc-docker x86_64\n\nThe lxc-docker package will install the latest tagged version of Docker.\nDependencies\nDocker depends on several packages which are specified as dependencies\nin the packages. The core dependencies are:\n\nsystemd\nlvm2\nsqlite3\nlibguestfs\nlxc\niproute2\nbridge-utils\n\nInstallation\nA simple\npacman -S lxc-docker\n\nis all that is needed.\nStarting Docker\nThere is a systemd service unit created for Docker. To start Docker as\nservice:\n$ sudo systemctl start lxc-docker\n\nTo start on system boot:\n$ sudo systemctl enable lxc-docker\n\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options.",
"title": "FrugalWare"
},
{
"loc": "/installation/frugalware#frugalware",
"tags": "",
"text": "Installing on FrugalWare is handled via the official packages: lxc-docker i686 lxc-docker x86_64 The lxc-docker package will install the latest tagged version of Docker.",
"title": "FrugalWare"
},
{
"loc": "/installation/frugalware#dependencies",
"tags": "",
"text": "Docker depends on several packages which are specified as dependencies\nin the packages. The core dependencies are: systemd lvm2 sqlite3 libguestfs lxc iproute2 bridge-utils",
"title": "Dependencies"
},
{
"loc": "/installation/frugalware#installation",
"tags": "",
"text": "A simple pacman -S lxc-docker is all that is needed.",
"title": "Installation"
},
{
"loc": "/installation/frugalware#starting-docker",
"tags": "",
"text": "There is a systemd service unit created for Docker. To start Docker as\nservice: $ sudo systemctl start lxc-docker To start on system boot: $ sudo systemctl enable lxc-docker",
"title": "Starting Docker"
},
{
"loc": "/installation/frugalware#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/google/",
"tags": "",
"text": "Google Cloud Platform\nQuickStart with Container-optimized Google Compute Engine images\n\n\nGo to Google Cloud Console and create a new Cloud Project with\n Compute Engine enabled\n\n\nDownload and configure the Google Cloud SDK to use your\n project with the following commands:\n$ curl -sSL https://sdk.cloud.google.com | bash\n$ gcloud auth login\n$ gcloud config set project google-cloud-project-id\n\n\n\nStart a new instance using the latest Container-optimized image:\n (select a zone close to you and the desired instance size)\n$ gcloud compute instances create docker-playground \\\n --image container-vm \\\n --zone us-central1-a \\\n --machine-type f1-micro\n\n\n\nConnect to the instance using SSH:\n$ gcloud compute ssh --zone us-central1-a docker-playground\ndocker-playground:~$ sudo docker run hello-world\n\nHello from Docker.\nThis message shows that your installation appears to be working correctly.\n...\n\n\nRead more about deploying Containers on Google Cloud Platform.",
"title": "Google Cloud Platform"
},
{
"loc": "/installation/google#google-cloud-platform",
"tags": "",
"text": "",
"title": "Google Cloud Platform"
},
{
"loc": "/installation/google#quickstart-with-container-optimized-google-compute-engine-images",
"tags": "",
"text": "Go to Google Cloud Console and create a new Cloud Project with\n Compute Engine enabled Download and configure the Google Cloud SDK to use your\n project with the following commands: $ curl -sSL https://sdk.cloud.google.com | bash\n$ gcloud auth login\n$ gcloud config set project google-cloud-project-id Start a new instance using the latest Container-optimized image :\n (select a zone close to you and the desired instance size) $ gcloud compute instances create docker-playground \\\n --image container-vm \\\n --zone us-central1-a \\\n --machine-type f1-micro Connect to the instance using SSH: $ gcloud compute ssh --zone us-central1-a docker-playground\ndocker-playground:~$ sudo docker run hello-world Hello from Docker.\nThis message shows that your installation appears to be working correctly.\n... Read more about deploying Containers on Google Cloud Platform .",
"title": "QuickStart with Container-optimized Google Compute Engine images"
},
{
"loc": "/installation/gentoolinux/",
"tags": "",
"text": "Gentoo\nInstalling Docker on Gentoo Linux can be accomplished using one of two ways: the official way and the docker-overlay way.\nOfficial project page of Gentoo Docker team.\nOfficial way\nThe first and recommended way if you are looking for a stable\nexperience is to use the official app-emulation/docker package directly\nfrom the tree.\nIf any issues arise from this ebuild including, missing kernel \nconfiguration flags or dependencies, open a bug \non the Gentoo Bugzilla assigned to docker AT gentoo DOT org \nor join and ask in the official\nIRC channel on the Freenode network.\ndocker-overlay way\nIf you're looking for a -bin ebuild, a live ebuild, or a bleeding edge\nebuild, use the provided overlay, docker-overlay\nwhich can be added using app-portage/layman. The most accurate and\nup-to-date documentation for properly installing and using the overlay\ncan be found in the overlay.\nIf any issues arise from this ebuild or the resulting binary, including\nand especially missing kernel configuration flags or dependencies, \nopen an issue on \nthe docker-overlay repository or ping tianon directly in the #docker \nIRC channel on the Freenode network.\nInstallation\nAvailable USE flags\n\n\n\nUSE Flag\nDefault\nDescription\n\n\n\n\naufs\n\nEnables dependencies for the \"aufs\" graph driver, including necessary kernel flags.\n\n\nbtrfs\n\nEnables dependencies for the \"btrfs\" graph driver, including necessary kernel flags.\n\n\ncontrib\nYes\nInstall additional contributed scripts and components.\n\n\ndevice-mapper\nYes\nEnables dependencies for the \"devicemapper\" graph driver, including necessary kernel flags.\n\n\ndoc\n\nAdd extra documentation (API, Javadoc, etc). 
It is recommended to enable per package instead of globally.\n\n\nlxc\n\nEnables dependencies for the \"lxc\" execution driver.\n\n\nvim-syntax\n\nPulls in related vim syntax scripts.\n\n\nzsh-completion\n\nEnable zsh completion support.\n\n\n\nUSE flags are described in detail on tianon's\nblog.\nThe package should properly pull in all the necessary dependencies and\nprompt for all necessary kernel options.\n$ sudo emerge -av app-emulation/docker\n\n\nNote: Sometimes there is a disparity between the latest versions \nin the official Gentoo tree and the docker-overlay.\nPlease be patient, and the latest version should propagate shortly.\n\nStarting Docker\nEnsure that you are running a kernel that includes all the necessary\nmodules and configuration (and optionally for device-mapper\nand AUFS or Btrfs, depending on the storage driver you've decided to use).\nTo use Docker, the docker daemon must be running as root.\nTo use Docker as a non-root user, add yourself to the docker \ngroup by running the following command:\n$ sudo usermod -a -G docker user\n\nOpenRC\nTo start the docker daemon:\n$ sudo /etc/init.d/docker start\n\nTo start on system boot:\n$ sudo rc-update add docker default\n\nsystemd\nTo start the docker daemon:\n$ sudo systemctl start docker\n\nTo start on system boot:\n$ sudo systemctl enable docker\n\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options.",
"title": "Gentoo"
},
{
"loc": "/installation/gentoolinux#gentoo",
"tags": "",
"text": "Installing Docker on Gentoo Linux can be accomplished using one of two ways: the official way and the docker-overlay way. Official project page of Gentoo Docker team.",
"title": "Gentoo"
},
{
"loc": "/installation/gentoolinux#official-way",
"tags": "",
"text": "The first and recommended way if you are looking for a stable \nexperience is to use the official app-emulation/docker package directly \nfrom the tree. If any issues arise from this ebuild including, missing kernel \nconfiguration flags or dependencies, open a bug \non the Gentoo Bugzilla assigned to docker AT gentoo DOT org \nor join and ask in the official IRC channel on the Freenode network.",
"title": "Official way"
},
{
"loc": "/installation/gentoolinux#docker-overlay-way",
"tags": "",
"text": "If you're looking for a -bin ebuild, a live ebuild, or a bleeding edge\nebuild, use the provided overlay, docker-overlay \nwhich can be added using app-portage/layman . The most accurate and\nup-to-date documentation for properly installing and using the overlay\ncan be found in the overlay . If any issues arise from this ebuild or the resulting binary, including\nand especially missing kernel configuration flags or dependencies, \nopen an issue on \nthe docker-overlay repository or ping tianon directly in the #docker \nIRC channel on the Freenode network.",
"title": "docker-overlay way"
},
{
"loc": "/installation/gentoolinux#installation",
"tags": "",
"text": "Available USE flags USE Flag Default Description aufs Enables dependencies for the \"aufs\" graph driver, including necessary kernel flags. btrfs Enables dependencies for the \"btrfs\" graph driver, including necessary kernel flags. contrib Yes Install additional contributed scripts and components. device-mapper Yes Enables dependencies for the \"devicemapper\" graph driver, including necessary kernel flags. doc Add extra documentation (API, Javadoc, etc). It is recommended to enable per package instead of globally. lxc Enables dependencies for the \"lxc\" execution driver. vim-syntax Pulls in related vim syntax scripts. zsh-completion Enable zsh completion support. USE flags are described in detail on tianon's\nblog . The package should properly pull in all the necessary dependencies and\nprompt for all necessary kernel options. $ sudo emerge -av app-emulation/docker Note: Sometimes there is a disparity between the latest versions \nin the official Gentoo tree and the docker-overlay . \nPlease be patient, and the latest version should propagate shortly.",
"title": "Installation"
},
{
"loc": "/installation/gentoolinux#starting-docker",
"tags": "",
"text": "Ensure that you are running a kernel that includes all the necessary\nmodules and configuration (and optionally for device-mapper\nand AUFS or Btrfs, depending on the storage driver you've decided to use). To use Docker, the docker daemon must be running as root . \nTo use Docker as a non-root user, add yourself to the docker \ngroup by running the following command: $ sudo usermod -a -G docker user OpenRC To start the docker daemon: $ sudo /etc/init.d/docker start To start on system boot: $ sudo rc-update add docker default systemd To start the docker daemon: $ sudo systemctl start docker To start on system boot: $ sudo systemctl enable docker If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options .",
"title": "Starting Docker"
},
{
"loc": "/installation/softlayer/",
"tags": "",
"text": "IBM SoftLayer\n\nCreate an IBM SoftLayer account.\nLog in to the SoftLayer Customer Portal.\nFrom the Devices menu select Device List\nClick Order Devices on the top right of the window below the menu bar.\nUnder Virtual Server click Hourly\n\nCreate a new SoftLayer Virtual Server Instance (VSI) using the default\n values for all the fields and choose:\n\nThe desired location for Datacenter\nUbuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit)\n for Operating System.\n\n\n\nClick the Continue Your Order button at the bottom right.\n\nFill out VSI hostname and domain.\nInsert the required User Metadata and place the order.\nThen continue with the Ubuntu\n instructions.\n\nWhat next?\nContinue with the User Guide.",
"title": "IBM Softlayer"
},
{
"loc": "/installation/softlayer#ibm-softlayer",
"tags": "",
"text": "Create an IBM SoftLayer account . Log in to the SoftLayer Customer Portal . From the Devices menu select Device List Click Order Devices on the top right of the window below the menu bar. Under Virtual Server click Hourly Create a new SoftLayer Virtual Server Instance (VSI) using the default\n values for all the fields and choose: The desired location for Datacenter Ubuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit) \n for Operating System . Click the Continue Your Order button at the bottom right. Fill out VSI hostname and domain . Insert the required User Metadata and place the order. Then continue with the Ubuntu \n instructions.",
"title": "IBM SoftLayer"
},
{
"loc": "/installation/softlayer#what-next",
"tags": "",
"text": "Continue with the User Guide .",
"title": "What next?"
},
{
"loc": "/installation/rackspace/",
"tags": "",
"text": "Rackspace Cloud\nInstalling Docker on Ubuntu provided by Rackspace is pretty\nstraightforward, and you should mostly be able to follow the\nUbuntu installation guide.\nHowever, there is one caveat:\nIf you are using any Linux not already shipping with the 3.8 kernel you\nwill need to install it. And this is a little more difficult on\nRackspace.\nRackspace boots their servers using grub's menu.lst\nand does not like non virtual packages (e.g., Xen compatible)\nkernels there, although they do work. This results in\nupdate-grub not having the expected result, and\nyou will need to set the kernel manually.\nDo not attempt this on a production machine!\n# update apt\n$ apt-get update\n\n# install the new kernel\n$ apt-get install linux-generic-lts-raring\n\nGreat, now you have the kernel installed in /boot/, next you need to\nmake it boot next time.\n# find the exact names\n$ find /boot/ -name '*3.8*'\n\n# this should return some results\n\nNow you need to manually edit /boot/grub/menu.lst,\nyou will find a section at the bottom with the existing options. Copy\nthe top one and substitute the new kernel into that. 
Make sure the new\nkernel is on top, and double check the kernel and initrd lines point to\nthe right files.\nTake special care to double check the kernel and initrd entries.\n# now edit /boot/grub/menu.lst\n$ vi /boot/grub/menu.lst\n\nIt will probably look something like this:\n## ## End Default Options ##\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.8.x generic\nroot (hd0)\nkernel /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0\ninitrd /boot/initrd.img-3.8.0-19-generic\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual\nroot (hd0)\nkernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0\ninitrd /boot/initrd.img-3.2.0-38-virtual\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode)\nroot (hd0)\nkernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash single\ninitrd /boot/initrd.img-3.2.0-38-virtual\n\nReboot the server (either via command line or console)\n# reboot\n\nVerify the kernel was updated\n$ uname -a\n# Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\n# nice! 3.8.\n\nNow you can finish with the Ubuntu\ninstructions.",
"title": "Rackspace Cloud"
},
{
"loc": "/installation/rackspace#rackspace-cloud",
"tags": "",
"text": "Installing Docker on Ubuntu provided by Rackspace is pretty\nstraightforward, and you should mostly be able to follow the Ubuntu installation guide. However, there is one caveat: If you are using any Linux not already shipping with the 3.8 kernel you\nwill need to install it. And this is a little more difficult on\nRackspace. Rackspace boots their servers using grub's menu.lst \nand does not like non virtual packages (e.g., Xen compatible)\nkernels there, although they do work. This results in update-grub not having the expected result, and\nyou will need to set the kernel manually. Do not attempt this on a production machine! # update apt\n$ apt-get update\n\n# install the new kernel\n$ apt-get install linux-generic-lts-raring Great, now you have the kernel installed in /boot/ , next you need to\nmake it boot next time. # find the exact names\n$ find /boot/ -name '*3.8*'\n\n# this should return some results Now you need to manually edit /boot/grub/menu.lst ,\nyou will find a section at the bottom with the existing options. Copy\nthe top one and substitute the new kernel into that. Make sure the new\nkernel is on top, and double check the kernel and initrd lines point to\nthe right files. Take special care to double check the kernel and initrd entries. 
# now edit /boot/grub/menu.lst\n$ vi /boot/grub/menu.lst It will probably look something like this: ## ## End Default Options ##\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.8.x generic\nroot (hd0)\nkernel /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0\ninitrd /boot/initrd.img-3.8.0-19-generic\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual\nroot (hd0)\nkernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0\ninitrd /boot/initrd.img-3.2.0-38-virtual\n\ntitle Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode)\nroot (hd0)\nkernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash single\ninitrd /boot/initrd.img-3.2.0-38-virtual Reboot the server (either via command line or console) # reboot Verify the kernel was updated $ uname -a\n# Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\n# nice! 3.8. Now you can finish with the Ubuntu \ninstructions.",
"title": "Rackspace Cloud"
},
{
"loc": "/installation/rhel/",
"tags": "",
"text": "Red Hat Enterprise Linux\nDocker is supported on the following versions of RHEL:\n\nRed Hat Enterprise Linux 7 (64-bit)\nRed Hat Enterprise Linux 6.5 (64-bit) or later\n\nKernel support\nRHEL will only support Docker via the extras channel or EPEL package when\nrunning on kernels shipped by the distribution. There are kernel changes which\nwill cause issues if one decides to step outside that box and run\nnon-distribution kernel packages.\nRed Hat Enterprise Linux 7 Installation\nRed Hat Enterprise Linux 7 (64 bit) has shipped with\nDocker.\nAn overview and some guidance can be found in the Release\nNotes.\nDocker is located in the extras channel. To install Docker:\n\n\nEnable the extras channel:\n$ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms\n\n\n\nInstall Docker:\n$ sudo yum install docker\n\n\n\nAdditional installation, configuration, and usage information,\nincluding a Get Started with Docker Containers in Red Hat\nEnterprise Linux 7\nguide, can be found by Red Hat customers on the Red Hat Customer\nPortal.\nPlease continue with the Starting the Docker daemon.\nRed Hat Enterprise Linux 6.5 Installation\nYou will need 64 bit RHEL\n6.5 or later, with\na RHEL 6 kernel version 2.6.32-431 or higher as this has specific kernel\nfixes to allow Docker to work.\nDocker is available for RHEL6.5 on EPEL. Please note that\nthis package is part of Extra Packages for Enterprise Linux\n(EPEL), a community effort to\ncreate and maintain additional packages for the RHEL distribution.\nKernel support\nRHEL will only support Docker via the extras channel or EPEL package when\nrunning on kernels shipped by the distribution. There are things like namespace\nchanges which will cause issues if one decides to step outside that box and run\nnon-distro kernel packages.\n\nWarning:\nPlease keep your system up to date using yum update and rebooting\nyour system. 
Keeping your system updated ensures critical security\n vulnerabilities and severe bugs (such as those found in kernel 2.6.32)\nare fixed.\n\nInstallation\nFirstly, you need to install the EPEL repository. Please follow the\nEPEL installation\ninstructions.\nThere is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io.\nTo proceed with docker-io installation, you may need to remove the\ndocker package first.\n$ sudo yum -y remove docker\n\nNext, let's install the docker-io package which will install Docker on our host.\n$ sudo yum install docker-io\n\nTo update the docker-io package\n$ sudo yum -y update docker-io\n\nPlease continue with the Starting the Docker daemon.\nStarting the Docker daemon\nNow that it's installed, let's start the Docker daemon.\n$ sudo service docker start\n\nIf we want Docker to start at boot, we should also:\n$ sudo chkconfig docker on\n\nNow let's verify that Docker is working.\n$ sudo docker run -i -t fedora /bin/bash\n\n\nNote: If you get a Cannot start container error mentioning SELinux\nor permission denied, you may need to update the SELinux policies.\nThis can be done using sudo yum upgrade selinux-policy and then rebooting.\n\nDone!\nContinue with the User Guide.\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options.\nIssues?\nIf you have any issues - please report them directly in the\nRed Hat Bugzilla for docker-io component.",
"title": "Red Hat Enterprise Linux"
},
{
"loc": "/installation/rhel#red-hat-enterprise-linux",
"tags": "",
"text": "Docker is supported on the following versions of RHEL: Red Hat Enterprise Linux 7 (64-bit) Red Hat Enterprise Linux 6.5 (64-bit) or later",
"title": "Red Hat Enterprise Linux"
},
{
"loc": "/installation/rhel#kernel-support",
"tags": "",
"text": "RHEL will only support Docker via the extras channel or EPEL package when\nrunning on kernels shipped by the distribution. There are kernel changes which\nwill cause issues if one decides to step outside that box and run\nnon-distribution kernel packages.",
"title": "Kernel support"
},
{
"loc": "/installation/rhel#red-hat-enterprise-linux-7-installation",
"tags": "",
"text": "Red Hat Enterprise Linux 7 (64 bit) has shipped with\nDocker .\nAn overview and some guidance can be found in the Release\nNotes . Docker is located in the extras channel. To install Docker: Enable the extras channel: $ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms Install Docker: $ sudo yum install docker Additional installation, configuration, and usage information,\nincluding a Get Started with Docker Containers in Red Hat\nEnterprise Linux 7 \nguide, can be found by Red Hat customers on the Red Hat Customer\nPortal . Please continue with the Starting the Docker daemon .",
"title": "Red Hat Enterprise Linux 7 Installation"
},
{
"loc": "/installation/rhel#red-hat-enterprise-linux-65-installation",
"tags": "",
"text": "You will need 64 bit RHEL\n6.5 or later, with\na RHEL 6 kernel version 2.6.32-431 or higher as this has specific kernel\nfixes to allow Docker to work. Docker is available for RHEL6.5 on EPEL. Please note that\nthis package is part of Extra Packages for Enterprise Linux\n(EPEL) , a community effort to\ncreate and maintain additional packages for the RHEL distribution. Kernel support RHEL will only support Docker via the extras channel or EPEL package when\nrunning on kernels shipped by the distribution. There are things like namespace\nchanges which will cause issues if one decides to step outside that box and run\nnon-distro kernel packages. Warning :\nPlease keep your system up to date using yum update and rebooting\nyour system. Keeping your system updated ensures critical security\n vulnerabilities and severe bugs (such as those found in kernel 2.6.32)\nare fixed.",
"title": "Red Hat Enterprise Linux 6.5 Installation"
},
{
"loc": "/installation/rhel#installation",
"tags": "",
"text": "Firstly, you need to install the EPEL repository. Please follow the EPEL installation\ninstructions . There is a package name conflict with a system tray application\nand its executable, so the Docker RPM package was called docker-io . To proceed with docker-io installation, you may need to remove the docker package first. $ sudo yum -y remove docker Next, let's install the docker-io package which will install Docker on our host. $ sudo yum install docker-io To update the docker-io package $ sudo yum -y update docker-io Please continue with the Starting the Docker daemon .",
"title": "Installation"
},
{
"loc": "/installation/rhel#starting-the-docker-daemon",
"tags": "",
"text": "Now that it's installed, let's start the Docker daemon. $ sudo service docker start If we want Docker to start at boot, we should also: $ sudo chkconfig docker on Now let's verify that Docker is working. $ sudo docker run -i -t fedora /bin/bash Note: If you get a Cannot start container error mentioning SELinux\nor permission denied, you may need to update the SELinux policies.\nThis can be done using sudo yum upgrade selinux-policy and then rebooting. Done! Continue with the User Guide .",
"title": "Starting the Docker daemon"
},
{
"loc": "/installation/rhel#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our Systemd article to\nlearn how to customize your Systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/rhel#issues",
"tags": "",
"text": "If you have any issues - please report them directly in the Red Hat Bugzilla for docker-io component .",
"title": "Issues?"
},
{
"loc": "/installation/oracle/",
"tags": "",
"text": "Oracle Linux 6 and 7\nYou do not require an Oracle Linux Support subscription to install Docker on\nOracle Linux.\nFor Oracle Linux customers with an active support subscription:\nDocker is available in either the ol6_x86_64_addons or ol7_x86_64_addons\nchannel for Oracle Linux 6 and Oracle Linux 7 on the Unbreakable Linux Network\n(ULN).\nFor Oracle Linux users without an active support subscription:\nDocker is available in the appropriate ol6_addons or ol7_addons repository\non Oracle Public Yum.\nDocker requires the use of the Unbreakable Enterprise Kernel Release 3 (3.8.13)\nor higher on Oracle Linux. This kernel supports the Docker btrfs storage engine\non both Oracle Linux 6 and 7.\nDue to current Docker limitations, Docker is only able to run only on the x86_64\narchitecture.\nTo enable the addons channel via the Unbreakable Linux Network:\n\nEnable either the ol6_x86_64_addons or ol7_x86_64_addons channel\nvia the ULN web interface.\nConsult the Unbreakable Linux Network User's\nGuide for\ndocumentation on subscribing to channels.\n\nTo enable the addons repository via Oracle Public Yum:\nThe latest release of Oracle Linux 6 and 7 are automatically configured to use\nthe Oracle Public Yum repositories during installation. 
However, the addons\nrepository is not enabled by default.\nTo enable the addons repository:\n\nEdit either /etc/yum.repos.d/public-yum-ol6.repo or\n/etc/yum.repos.d/public-yum-ol7.repo\nand set enabled=1 in the [ol6_addons] or the [ol7_addons] stanza.\n\nTo install Docker:\n\n\nEnsure the appropriate addons channel or repository has been enabled.\n\n\nUse yum to install the Docker package:\n$ sudo yum install docker\n\n\n\nTo start Docker:\n\n\nNow that it's installed, start the Docker daemon:\n\n\nOn Oracle Linux 6:\n$ sudo service docker start\n\n\n\nOn Oracle Linux 7:\n$ sudo systemctl start docker.service\n\n\n\n\n\nIf you want the Docker daemon to start automatically at boot:\n\n\nOn Oracle Linux 6:\n$ sudo chkconfig docker on\n\n\n\nOn Oracle Linux 7:\n$ sudo systemctl enable docker.service\n\n\n\n\n\nDone!\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options.\nUsing the btrfs storage engine\nDocker on Oracle Linux 6 and 7 supports the use of the btrfs storage engine.\nBefore enabling btrfs support, ensure that /var/lib/docker is stored on a\nbtrfs-based filesystem. Review Chapter\n5 of the Oracle\nLinux Administrator's Solution\nGuide for details\non how to create and mount btrfs filesystems.\nTo enable btrfs support on Oracle Linux:\n\nEnsure that /var/lib/docker is on a btrfs filesystem.\nEdit /etc/sysconfig/docker and add -s btrfs to the OTHER_ARGS field.\nRestart the Docker daemon:\n\nYou can now continue with the Docker User Guide.\nKnown issues\nDocker unmounts btrfs filesystem on shutdown\nIf you're running Docker using the btrfs storage engine and you stop the Docker\nservice, it will unmount the btrfs filesystem during the shutdown process. 
You\nshould ensure the filesystem is mounted properly prior to restarting the Docker\nservice.\nOn Oracle Linux 7, you can use a systemd.mount definition and modify the\nDocker systemd.service to depend on the btrfs mount defined in systemd.\nSELinux Support on Oracle Linux 7\nSELinux must be set to Permissive or Disabled in /etc/sysconfig/selinux to\nuse the btrfs storage engine on Oracle Linux 7.\nFurther issues?\nIf you have a current Basic or Premier Support Subscription for Oracle Linux,\nyou can report any issues you have with the installation of Docker via a Service\nRequest at My Oracle Support.\nIf you do not have an Oracle Linux Support Subscription, you can use the Oracle\nLinux\nForum for community-based support.",
"title": "Oracle Linux"
},
{
"loc": "/installation/oracle#oracle-linux-6-and-7",
"tags": "",
"text": "You do not require an Oracle Linux Support subscription to install Docker on\nOracle Linux. For Oracle Linux customers with an active support subscription: \nDocker is available in either the ol6_x86_64_addons or ol7_x86_64_addons \nchannel for Oracle Linux 6 and Oracle Linux 7 on the Unbreakable Linux Network\n(ULN) . For Oracle Linux users without an active support subscription: \nDocker is available in the appropriate ol6_addons or ol7_addons repository\non Oracle Public Yum . Docker requires the use of the Unbreakable Enterprise Kernel Release 3 (3.8.13)\nor higher on Oracle Linux. This kernel supports the Docker btrfs storage engine\non both Oracle Linux 6 and 7. Due to current Docker limitations, Docker is only able to run only on the x86_64\narchitecture.",
"title": "Oracle Linux 6 and 7"
},
{
"loc": "/installation/oracle#to-enable-the-addons-channel-via-the-unbreakable-linux-network",
"tags": "",
"text": "Enable either the ol6_x86_64_addons or ol7_x86_64_addons channel\nvia the ULN web interface.\nConsult the Unbreakable Linux Network User's\nGuide for\ndocumentation on subscribing to channels.",
"title": "To enable the addons channel via the Unbreakable Linux Network:"
},
{
"loc": "/installation/oracle#to-enable-the-addons-repository-via-oracle-public-yum",
"tags": "",
"text": "The latest release of Oracle Linux 6 and 7 are automatically configured to use\nthe Oracle Public Yum repositories during installation. However, the addons \nrepository is not enabled by default. To enable the addons repository: Edit either /etc/yum.repos.d/public-yum-ol6.repo or /etc/yum.repos.d/public-yum-ol7.repo \nand set enabled=1 in the [ol6_addons] or the [ol7_addons] stanza.",
"title": "To enable the addons repository via Oracle Public Yum:"
},
{
"loc": "/installation/oracle#to-install-docker",
"tags": "",
"text": "Ensure the appropriate addons channel or repository has been enabled. Use yum to install the Docker package: $ sudo yum install docker",
"title": "To install Docker:"
},
{
"loc": "/installation/oracle#to-start-docker",
"tags": "",
"text": "Now that it's installed, start the Docker daemon: On Oracle Linux 6: $ sudo service docker start On Oracle Linux 7: $ sudo systemctl start docker.service If you want the Docker daemon to start automatically at boot: On Oracle Linux 6: $ sudo chkconfig docker on On Oracle Linux 7: $ sudo systemctl enable docker.service Done!",
"title": "To start Docker:"
},
{
"loc": "/installation/oracle#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/oracle#using-the-btrfs-storage-engine",
"tags": "",
"text": "Docker on Oracle Linux 6 and 7 supports the use of the btrfs storage engine.\nBefore enabling btrfs support, ensure that /var/lib/docker is stored on a\nbtrfs-based filesystem. Review Chapter\n5 of the Oracle\nLinux Administrator's Solution\nGuide for details\non how to create and mount btrfs filesystems. To enable btrfs support on Oracle Linux: Ensure that /var/lib/docker is on a btrfs filesystem. Edit /etc/sysconfig/docker and add -s btrfs to the OTHER_ARGS field. Restart the Docker daemon: You can now continue with the Docker User Guide .",
"title": "Using the btrfs storage engine"
},
{
"loc": "/installation/oracle#known-issues",
"tags": "",
"text": "Docker unmounts btrfs filesystem on shutdown If you're running Docker using the btrfs storage engine and you stop the Docker\nservice, it will unmount the btrfs filesystem during the shutdown process. You\nshould ensure the filesystem is mounted properly prior to restarting the Docker\nservice. On Oracle Linux 7, you can use a systemd.mount definition and modify the\nDocker systemd.service to depend on the btrfs mount defined in systemd. SElinux Support on Oracle Linux 7 SElinux must be set to Permissive or Disabled in /etc/sysconfig/selinux to\nuse the btrfs storage engine on Oracle Linux 7.",
"title": "Known issues"
},
{
"loc": "/installation/oracle#further-issues",
"tags": "",
"text": "If you have a current Basic or Premier Support Subscription for Oracle Linux,\nyou can report any issues you have with the installation of Docker via a Service\nRequest at My Oracle Support . If you do not have an Oracle Linux Support Subscription, you can use the Oracle\nLinux\nForum for community-based support.",
"title": "Further issues?"
},
{
"loc": "/installation/SUSE/",
"tags": "",
"text": "openSUSE\nDocker is available in openSUSE 12.3 and later. Please note that due\nto its current limitations Docker is able to run only 64 bit architecture.\nDocker is not part of the official repositories of openSUSE 12.3 and\nopenSUSE 13.1. Hence it is neccessary to add the Virtualization\nrepository from\nOBS to install the docker package.\nExecute one of the following commands to add the Virtualization repository:\n# openSUSE 12.3\n$ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_12.3/ Virtualization\n\n# openSUSE 13.1\n$ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.1/ Virtualization\n\nNo extra repository is required for openSUSE 13.2 and later.\nSUSE Linux Enterprise\nDocker is available in SUSE Linux Enterprise 12 and later. Please note that\ndue to its current limitations Docker is able to run only on 64 bit\narchitecture.\nInstallation\nInstall the Docker package.\n$ sudo zypper in docker\n\nNow that it's installed, let's start the Docker daemon.\n$ sudo systemctl start docker\n\nIf we want Docker to start at boot, we should also:\n$ sudo systemctl enable docker\n\nThe docker package creates a new group named docker. Users, other than\nroot user, need to be part of this group in order to interact with the\nDocker daemon. You can add users with:\n$ sudo /usr/sbin/usermod -a -G docker username\n\nTo verify that everything has worked as expected:\n$ sudo docker run --rm -i -t opensuse /bin/bash\n\nThis should download and import the opensuse image, and then start bash in\na container. 
To exit the container type exit.\nIf you want your containers to be able to access the external network you must\nenable the net.ipv4.ip_forward rule.\nThis can be done using YaST by browsing to the\nNetwork Devices - Network Settings - Routing menu and ensuring that the\nEnable IPv4 Forwarding box is checked.\nThis option cannot be changed when networking is handled by the Network Manager.\nIn such cases the /etc/sysconfig/SuSEfirewall2 file needs to be edited by\nhand to ensure the FW_ROUTE flag is set to yes like so:\nFW_ROUTE=\"yes\"\n\nDone!\nCustom daemon options\nIf you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options.\nWhat's next\nContinue with the User Guide.",
"title": "SUSE"
},
{
"loc": "/installation/SUSE#opensuse",
"tags": "",
"text": "Docker is available in openSUSE 12.3 and later . Please note that due\nto its current limitations Docker is able to run only 64 bit architecture. Docker is not part of the official repositories of openSUSE 12.3 and\nopenSUSE 13.1. Hence it is neccessary to add the Virtualization\nrepository from OBS to install the docker package. Execute one of the following commands to add the Virtualization repository: # openSUSE 12.3\n$ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_12.3/ Virtualization\n\n# openSUSE 13.1\n$ sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.1/ Virtualization No extra repository is required for openSUSE 13.2 and later.",
"title": "openSUSE"
},
{
"loc": "/installation/SUSE#suse-linux-enterprise",
"tags": "",
"text": "Docker is available in SUSE Linux Enterprise 12 and later . Please note that\ndue to its current limitations Docker is able to run only on 64 bit \narchitecture.",
"title": "SUSE Linux Enterprise"
},
{
"loc": "/installation/SUSE#installation",
"tags": "",
"text": "Install the Docker package. $ sudo zypper in docker Now that it's installed, let's start the Docker daemon. $ sudo systemctl start docker If we want Docker to start at boot, we should also: $ sudo systemctl enable docker The docker package creates a new group named docker. Users, other than\nroot user, need to be part of this group in order to interact with the\nDocker daemon. You can add users with: $ sudo /usr/sbin/usermod -a -G docker username To verify that everything has worked as expected: $ sudo docker run --rm -i -t opensuse /bin/bash This should download and import the opensuse image, and then start bash in\na container. To exit the container type exit . If you want your containers to be able to access the external network you must\nenable the net.ipv4.ip_forward rule.\nThis can be done using YaST by browsing to the Network Devices - Network Settings - Routing menu and ensuring that the Enable IPv4 Forwarding box is checked. This option cannot be changed when networking is handled by the Network Manager.\nIn such cases the /etc/sysconfig/SuSEfirewall2 file needs to be edited by\nhand to ensure the FW_ROUTE flag is set to yes like so: FW_ROUTE=\"yes\" Done!",
"title": "Installation"
},
{
"loc": "/installation/SUSE#custom-daemon-options",
"tags": "",
"text": "If you need to add an HTTP Proxy, set a different directory or partition for the\nDocker runtime files, or make other customizations, read our systemd article to\nlearn how to customize your systemd Docker daemon options .",
"title": "Custom daemon options"
},
{
"loc": "/installation/SUSE#whats-next",
"tags": "",
"text": "Continue with the User Guide .",
"title": "What's next"
},
{
"loc": "/compose/install/",
"tags": "",
"text": "Installing Compose\nTo install Compose, you'll need to install Docker first. You'll then install\nCompose with a curl command. \nInstall Docker\nFirst, you'll need to install Docker version 1.3 or greater.\nIf you're on OS X, you can use the\nOS X installer to install both\nDocker and the OSX helper app, boot2docker. Once boot2docker is running, set the\nenvironment variables that'll configure Docker and Compose to talk to it:\n$(boot2docker shellinit)\n\nTo persist the environment variables across shell sessions, add the above line\nto your ~/.bashrc file.\nFor complete instructions, or if you are on another platform, consult Docker's\ninstallation instructions.\nInstall Compose\nTo install Compose, run the following commands:\ncurl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` /usr/local/bin/docker-compose\nchmod +x /usr/local/bin/docker-compose\n\nOptionally, you can also install command completion for the\nbash shell.\nCompose is available for OS X and 64-bit Linux. If you're on another platform,\nCompose can also be installed as a Python package:\n$ sudo pip install -U docker-compose\n\nNo further steps are required; Compose should now be successfully installed.\nYou can test the installation by running docker-compose --version.\nCompose documentation\n\nUser guide\nCommand line reference\nYaml file reference\nCompose environment variables\nCompose command line completion",
"title": "Docker Compose"
},
{
"loc": "/compose/install#installing-compose",
"tags": "",
"text": "To install Compose, you'll need to install Docker first. You'll then install\nCompose with a curl command. Install Docker First, you'll need to install Docker version 1.3 or greater. If you're on OS X, you can use the OS X installer to install both\nDocker and the OSX helper app, boot2docker. Once boot2docker is running, set the\nenvironment variables that'll configure Docker and Compose to talk to it: $(boot2docker shellinit) To persist the environment variables across shell sessions, add the above line\nto your ~/.bashrc file. For complete instructions, or if you are on another platform, consult Docker's installation instructions . Install Compose To install Compose, run the following commands: curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` /usr/local/bin/docker-compose\nchmod +x /usr/local/bin/docker-compose Optionally, you can also install command completion for the\nbash shell. Compose is available for OS X and 64-bit Linux. If you're on another platform,\nCompose can also be installed as a Python package: $ sudo pip install -U docker-compose No further steps are required; Compose should now be successfully installed.\nYou can test the installation by running docker-compose --version .",
"title": "Installing Compose"
},
{
"loc": "/compose/install#compose-documentation",
"tags": "",
"text": "User guide Command line reference Yaml file reference Compose environment variables Compose command line completion",
"title": "Compose documentation"
},
{
"loc": "/userguide/",
"tags": "",
"text": "Welcome to the Docker User Guide\nIn the Introduction you got a taste of what Docker is and how it\nworks. In this guide we're going to take you through the fundamentals of\nusing Docker and integrating it into your environment.\nWe\u2019ll teach you how to use Docker to:\n\nDockerize your applications.\nRun your own containers.\nBuild Docker images.\nShare your Docker images with others.\nAnd a whole lot more!\n\nWe've broken this guide into major sections that take you through\nthe Docker life cycle:\nGetting Started with Docker Hub\nHow do I use Docker Hub?\nDocker Hub is the central hub for Docker. It hosts public Docker images\nand provides services to help you build and manage your Docker\nenvironment. To learn more:\nGo to Using Docker Hub.\nDockerizing Applications: A \"Hello world\"\nHow do I run applications inside containers?\nDocker offers a container-based virtualization platform to power your\napplications. To learn how to Dockerize applications and run them:\nGo to Dockerizing Applications.\nWorking with Containers\nHow do I manage my containers?\nOnce you get a grip on running your applications in Docker containers\nwe're going to show you how to manage those containers. To find out\nabout how to inspect, monitor and manage containers:\nGo to Working With Containers.\nWorking with Docker Images\nHow can I access, share and build my own images?\nOnce you've learnt how to use Docker it's time to take the next step and\nlearn how to build your own application images with Docker.\nGo to Working with Docker Images.\nLinking Containers Together\nUntil now we've seen how to build individual applications inside Docker\ncontainers. 
Now learn how to build whole application stacks with Docker\nby linking together multiple Docker containers.\nGo to Linking Containers Together.\nManaging Data in Containers\nNow we know how to link Docker containers together the next step is\nlearning how to manage data, volumes and mounts inside our containers.\nGo to Managing Data in Containers.\nWorking with Docker Hub\nNow we've learned a bit more about how to use Docker we're going to see\nhow to combine Docker with the services available on Docker Hub including\nTrusted Builds and private repositories.\nGo to Working with Docker Hub.\nDocker Compose\nDocker Compose allows you to define an application's components -- their containers,\nconfiguration, links and volumes -- in a single file. Then a single command\nwill set everything up and start your application running.\nGo to Docker Compose user guide.\nDocker Machine\nDocker Machine helps you get Docker Engines up and running quickly. Machine\ncan set up hosts for Docker Engines on your computer, on cloud providers,\nand/or in your data center, and then configure your Docker client to securely\ntalk to them.\nGo to Docker Machine user guide.\nDocker Swarm\nDocker Swarm pools several Docker Engines together and exposes them as a single\nvirtual Docker Engine. It serves the standard Docker API, so any tool that already\nworks with Docker can now transparently scale up to multiple hosts.\nGo to Docker Swarm user guide.\nGetting help\n\nDocker homepage\nDocker Hub\nDocker blog\nDocker documentation\nDocker Getting Started Guide\nDocker code on GitHub\nDocker mailing\n list\nDocker on IRC: irc.freenode.net and channel #docker\nDocker on Twitter\nGet Docker help on\n StackOverflow\nDocker.com",
"title": "The Docker User Guide"
},
{
"loc": "/userguide#welcome-to-the-docker-user-guide",
"tags": "",
"text": "In the Introduction you got a taste of what Docker is and how it\nworks. In this guide we're going to take you through the fundamentals of\nusing Docker and integrating it into your environment. We\u2019ll teach you how to use Docker to: Dockerize your applications. Run your own containers. Build Docker images. Share your Docker images with others. And a whole lot more! We've broken this guide into major sections that take you through\nthe Docker life cycle:",
"title": "Welcome to the Docker User Guide"
},
{
"loc": "/userguide#getting-started-with-docker-hub",
"tags": "",
"text": "How do I use Docker Hub? Docker Hub is the central hub for Docker. It hosts public Docker images\nand provides services to help you build and manage your Docker\nenvironment. To learn more: Go to Using Docker Hub .",
"title": "Getting Started with Docker Hub"
},
{
"loc": "/userguide#dockerizing-applications-a-hello-world",
"tags": "",
"text": "How do I run applications inside containers? Docker offers a container-based virtualization platform to power your\napplications. To learn how to Dockerize applications and run them: Go to Dockerizing Applications .",
"title": "Dockerizing Applications: A \"Hello world\""
},
{
"loc": "/userguide#working-with-containers",
"tags": "",
"text": "How do I manage my containers? Once you get a grip on running your applications in Docker containers\nwe're going to show you how to manage those containers. To find out\nabout how to inspect, monitor and manage containers: Go to Working With Containers .",
"title": "Working with Containers"
},
{
"loc": "/userguide#working-with-docker-images",
"tags": "",
"text": "How can I access, share and build my own images? Once you've learnt how to use Docker it's time to take the next step and\nlearn how to build your own application images with Docker. Go to Working with Docker Images .",
"title": "Working with Docker Images"
},
{
"loc": "/userguide#linking-containers-together",
"tags": "",
"text": "Until now we've seen how to build individual applications inside Docker\ncontainers. Now learn how to build whole application stacks with Docker\nby linking together multiple Docker containers. Go to Linking Containers Together .",
"title": "Linking Containers Together"
},
{
"loc": "/userguide#managing-data-in-containers",
"tags": "",
"text": "Now we know how to link Docker containers together the next step is\nlearning how to manage data, volumes and mounts inside our containers. Go to Managing Data in Containers .",
"title": "Managing Data in Containers"
},
{
"loc": "/userguide#working-with-docker-hub",
"tags": "",
"text": "Now we've learned a bit more about how to use Docker we're going to see\nhow to combine Docker with the services available on Docker Hub including\nTrusted Builds and private repositories. Go to Working with Docker Hub .",
"title": "Working with Docker Hub"
},
{
"loc": "/userguide#docker-compose",
"tags": "",
"text": "Docker Compose allows you to define a application's components -- their containers,\nconfiguration, links and volumes -- in a single file. Then a single command\nwill set everything up and start your application running. Go to Docker Compose user guide .",
"title": "Docker Compose"
},
{
"loc": "/userguide#docker-machine",
"tags": "",
"text": "Docker Machine helps you get Docker Engines up and running quickly. Machine\ncan set up hosts for Docker Engines on your computer, on cloud providers,\nand/or in your data center, and then configure your Docker client to securely\ntalk to them. Go to Docker Machine user guide .",
"title": "Docker Machine"
},
{
"loc": "/userguide#docker-swarm",
"tags": "",
"text": "Docker Swarm pools several Docker Engines together and exposes them as a single\nvirtual Docker Engine. It serves the standard Docker API, so any tool that already\nworks with Docker can now transparently scale up to multiple hosts. Go to Docker Swarm user guide .",
"title": "Docker Swarm"
},
{
"loc": "/userguide#getting-help",
"tags": "",
"text": "Docker homepage Docker Hub Docker blog Docker documentation Docker Getting Started Guide Docker code on GitHub Docker mailing\n list Docker on IRC: irc.freenode.net and channel #docker Docker on Twitter Get Docker help on\n StackOverflow Docker.com",
"title": "Getting help"
},
{
"loc": "/userguide/dockerhub/",
"tags": "",
"text": "Getting Started with Docker Hub\nThis section provides a quick introduction to the Docker Hub,\nincluding how to create an account.\nThe Docker Hub is a centralized resource for working with\nDocker and its components. Docker Hub helps you collaborate with colleagues and get the\nmost out of Docker.To do this, it provides services such as:\n\nDocker image hosting.\nUser authentication.\nAutomated image builds and work-flow tools such as build triggers and web\n hooks.\nIntegration with GitHub and BitBucket.\n\nIn order to use Docker Hub, you will first need to register and create an account. Don't\nworry, creating an account is simple and free.\nCreating a Docker Hub Account\nThere are two ways for you to register and create an account:\n\nVia the web, or\nVia the command line.\n\nRegister via the web\nFill in the sign-up form by\nchoosing your user name and password and entering a valid email address. You can also\nsign up for the Docker Weekly mailing list, which has lots of information about what's\ngoing on in the world of Docker.\n\nRegister via the command line\nYou can also create a Docker Hub account via the command line with the\ndocker login command.\n$ sudo docker login\n\nConfirm your email\nOnce you've filled in the form, check your email for a welcome message asking for\nconfirmation so we can activate your account.\nLogin\nAfter you complete the confirmation process, you can login using the web console:\n\nOr via the command line with the docker login command:\n$ sudo docker login\n\nYour Docker Hub account is now active and ready to use.\nNext steps\nNext, let's start learning how to Dockerize applications with our \"Hello world\"\nexercise.\nGo to Dockerizing Applications.",
"title": "Getting Started with Docker Hub"
},
{
"loc": "/userguide/dockerhub#getting-started-with-docker-hub",
"tags": "",
"text": "This section provides a quick introduction to the Docker Hub ,\nincluding how to create an account. The Docker Hub is a centralized resource for working with\nDocker and its components. Docker Hub helps you collaborate with colleagues and get the\nmost out of Docker.To do this, it provides services such as: Docker image hosting. User authentication. Automated image builds and work-flow tools such as build triggers and web\n hooks. Integration with GitHub and BitBucket. In order to use Docker Hub, you will first need to register and create an account. Don't\nworry, creating an account is simple and free.",
"title": "Getting Started with Docker Hub"
},
{
"loc": "/userguide/dockerhub#creating-a-docker-hub-account",
"tags": "",
"text": "There are two ways for you to register and create an account: Via the web, or Via the command line. Register via the web Fill in the sign-up form by\nchoosing your user name and password and entering a valid email address. You can also\nsign up for the Docker Weekly mailing list, which has lots of information about what's\ngoing on in the world of Docker. Register via the command line You can also create a Docker Hub account via the command line with the docker login command. $ sudo docker login Confirm your email Once you've filled in the form, check your email for a welcome message asking for\nconfirmation so we can activate your account. Login After you complete the confirmation process, you can login using the web console: Or via the command line with the docker login command: $ sudo docker login Your Docker Hub account is now active and ready to use.",
"title": "Creating a Docker Hub Account"
},
{
"loc": "/userguide/dockerhub#next-steps",
"tags": "",
"text": "Next, let's start learning how to Dockerize applications with our \"Hello world\"\nexercise. Go to Dockerizing Applications .",
"title": "Next steps"
},
{
"loc": "/userguide/dockerizing/",
"tags": "",
"text": "Dockerizing Applications: A \"Hello world\"\nSo what's this Docker thing all about?\nDocker allows you to run applications inside containers. Running an\napplication inside a container takes a single command: docker run.\n\nNote: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.\n\nHello world\nLet's try it now.\n$ sudo docker run ubuntu:14.04 /bin/echo 'Hello world'\nHello world\n\nAnd you just launched your first container!\nSo what just happened? Let's step through what the docker run command\ndid.\nFirst we specified the docker binary and the command we wanted to\nexecute, run. The docker run combination runs containers.\nNext we specified an image: ubuntu:14.04. This is the source of the container\nwe ran. Docker calls this an image. In this case we used an Ubuntu 14.04\noperating system image.\nWhen you specify an image, Docker looks first for the image on your\nDocker host. If it can't find it then it downloads the image from the public\nimage registry: Docker Hub.\nNext we told Docker what command to run inside our new container:\n/bin/echo 'Hello world'\n\nWhen our container was launched Docker created a new Ubuntu 14.04\nenvironment and then executed the /bin/echo command inside it. We saw\nthe result on the command line:\nHello world\n\nSo what happened to our container after that? Well Docker containers\nonly run as long as the command you specify is active. Here, as soon as\nHello world was echoed, the container stopped.\nAn Interactive Container\nLet's try the docker run command again, this time specifying a new\ncommand to run in our container.\n$ sudo docker run -t -i ubuntu:14.04 /bin/bash\nroot@af8bae53bdd3:/#\n\nHere we've again specified the docker run command and launched an\nubuntu:14.04 image. 
But we've also passed in two flags: -t and -i.\nThe -t flag assigns a pseudo-tty or terminal inside our new container\nand the -i flag allows us to make an interactive connection by\ngrabbing the standard in (STDIN) of the container.\nWe've also specified a new command for our container to run:\n/bin/bash. This will launch a Bash shell inside our container.\nSo now when our container is launched we can see that we've got a\ncommand prompt inside it:\nroot@af8bae53bdd3:/#\n\nLet's try running some commands inside our container:\nroot@af8bae53bdd3:/# pwd\n/\nroot@af8bae53bdd3:/# ls\nbin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var\n\nYou can see we've run the pwd to show our current directory and can\nsee we're in the / root directory. We've also done a directory listing\nof the root directory which shows us what looks like a typical Linux\nfile system.\nYou can play around inside this container and when you're done you can\nuse the exit command or enter Ctrl-D to finish.\nroot@af8bae53bdd3:/# exit\n\nAs with our previous container, once the Bash shell process has\nfinished, the container is stopped.\nA Daemonized Hello world\nNow a container that runs a command and then exits has some uses but\nit's not overly helpful. Let's create a container that runs as a daemon,\nlike most of the applications we're probably going to run with Docker.\nAgain we can do this with the docker run command:\n$ sudo docker run -d ubuntu:14.04 /bin/sh -c \"while true; do echo hello world; sleep 1; done\"\n1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147\n\nWait what? Where's our \"Hello world\"? Let's look at what we've run here.\nIt should look pretty familiar. We ran docker run but this time we\nspecified a flag: -d. 
The -d flag tells Docker to run the container\nand put it in the background, to daemonize it.\nWe also specified the same image: ubuntu:14.04.\nFinally, we specified a command to run:\n/bin/sh -c \"while true; do echo hello world; sleep 1; done\"\n\nThis is the (hello) world's silliest daemon: a shell script that echoes\nhello world forever.\nSo why aren't we seeing any hello world's? Instead Docker has returned\na really long string:\n1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147\n\nThis really long string is called a container ID. It uniquely\nidentifies a container so we can work with it.\n\nNote: \nThe container ID is a bit long and unwieldy and a bit later\non we'll see a shorter ID and some ways to name our containers to make\nworking with them easier.\n\nWe can use this container ID to see what's happening with our hello world daemon.\nFirstly let's make sure our container is running. We can\ndo that with the docker ps command. The docker ps command queries\nthe Docker daemon for information about all the containers it knows\nabout.\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n1e5535038e28 ubuntu:14.04 /bin/sh -c 'while tr 2 minutes ago Up 1 minute insane_babbage\n\nHere we can see our daemonized container. The docker ps has returned some useful\ninformation about it, starting with a shorter variant of its container ID:\n1e5535038e28.\nWe can also see the image we used to build it, ubuntu:14.04, the command it\nis running, its status and an automatically assigned name,\ninsane_babbage. \n\nNote: \nDocker automatically names any containers you start, a\nlittle later on we'll see how you can specify your own names.\n\nOkay, so we now know it's running. But is it doing what we asked it to do? To see this\nwe're going to look inside the container using the docker logs\ncommand. Let's use the container name Docker assigned.\n$ sudo docker logs insane_babbage\nhello world\nhello world\nhello world\n. . 
.\n\nThe docker logs command looks inside the container and returns its standard\noutput: in this case the output of our command hello world.\nAwesome! Our daemon is working and we've just created our first\nDockerized application!\nNow we've established we can create our own containers let's tidy up\nafter ourselves and stop our daemonized container. To do this we use the\ndocker stop command.\n$ sudo docker stop insane_babbage\ninsane_babbage\n\nThe docker stop command tells Docker to politely stop the running\ncontainer. If it succeeds it will return the name of the container it\nhas just stopped.\nLet's check it worked with the docker ps command.\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n\nExcellent. Our container has been stopped.\nNext steps\nNow we've seen how simple it is to get started with Docker let's learn how to\ndo some more advanced tasks.\nGo to Working With Containers.",
"title": "Dockerizing Applications"
},
{
"loc": "/userguide/dockerizing#dockerizing-applications-a-hello-world",
"tags": "",
"text": "So what's this Docker thing all about? Docker allows you to run applications inside containers. Running an\napplication inside a container takes a single command: docker run . Note: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.",
"title": "Dockerizing Applications: A \"Hello world\""
},
{
"loc": "/userguide/dockerizing#hello-world",
"tags": "",
"text": "Let's try it now. $ sudo docker run ubuntu:14.04 /bin/echo 'Hello world'\nHello world And you just launched your first container! So what just happened? Let's step through what the docker run command\ndid. First we specified the docker binary and the command we wanted to\nexecute, run . The docker run combination runs containers. Next we specified an image: ubuntu:14.04 . This is the source of the container\nwe ran. Docker calls this an image. In this case we used an Ubuntu 14.04\noperating system image. When you specify an image, Docker looks first for the image on your\nDocker host. If it can't find it then it downloads the image from the public\nimage registry: Docker Hub . Next we told Docker what command to run inside our new container: /bin/echo 'Hello world' When our container was launched Docker created a new Ubuntu 14.04\nenvironment and then executed the /bin/echo command inside it. We saw\nthe result on the command line: Hello world So what happened to our container after that? Well Docker containers\nonly run as long as the command you specify is active. Here, as soon as Hello world was echoed, the container stopped.",
"title": "Hello world"
},
{
"loc": "/userguide/dockerizing#an-interactive-container",
"tags": "",
"text": "Let's try the docker run command again, this time specifying a new\ncommand to run in our container. $ sudo docker run -t -i ubuntu:14.04 /bin/bash\nroot@af8bae53bdd3:/# Here we've again specified the docker run command and launched an ubuntu:14.04 image. But we've also passed in two flags: -t and -i .\nThe -t flag assigns a pseudo-tty or terminal inside our new container\nand the -i flag allows us to make an interactive connection by\ngrabbing the standard in ( STDIN ) of the container. We've also specified a new command for our container to run: /bin/bash . This will launch a Bash shell inside our container. So now when our container is launched we can see that we've got a\ncommand prompt inside it: root@af8bae53bdd3:/# Let's try running some commands inside our container: root@af8bae53bdd3:/# pwd\n/\nroot@af8bae53bdd3:/# ls\nbin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var You can see we've run the pwd to show our current directory and can\nsee we're in the / root directory. We've also done a directory listing\nof the root directory which shows us what looks like a typical Linux\nfile system. You can play around inside this container and when you're done you can\nuse the exit command or enter Ctrl-D to finish. root@af8bae53bdd3:/# exit As with our previous container, once the Bash shell process has\nfinished, the container is stopped.",
"title": "An Interactive Container"
},
{
"loc": "/userguide/dockerizing#a-daemonized-hello-world",
"tags": "",
"text": "Now a container that runs a command and then exits has some uses but\nit's not overly helpful. Let's create a container that runs as a daemon,\nlike most of the applications we're probably going to run with Docker. Again we can do this with the docker run command: $ sudo docker run -d ubuntu:14.04 /bin/sh -c \"while true; do echo hello world; sleep 1; done\"\n1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147 Wait what? Where's our \"Hello world\"? Let's look at what we've run here.\nIt should look pretty familiar. We ran docker run but this time we\nspecified a flag: -d . The -d flag tells Docker to run the container\nand put it in the background, to daemonize it. We also specified the same image: ubuntu:14.04 . Finally, we specified a command to run: /bin/sh -c \"while true; do echo hello world; sleep 1; done\" This is the (hello) world's silliest daemon: a shell script that echoes hello world forever. So why aren't we seeing any hello world 's? Instead Docker has returned\na really long string: 1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147 This really long string is called a container ID . It uniquely\nidentifies a container so we can work with it. Note: \nThe container ID is a bit long and unwieldy and a bit later\non we'll see a shorter ID and some ways to name our containers to make\nworking with them easier. We can use this container ID to see what's happening with our hello world daemon. Firstly let's make sure our container is running. We can\ndo that with the docker ps command. The docker ps command queries\nthe Docker daemon for information about all the containers it knows\nabout. $ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n1e5535038e28 ubuntu:14.04 /bin/sh -c 'while tr 2 minutes ago Up 1 minute insane_babbage Here we can see our daemonized container. The docker ps has returned some useful\ninformation about it, starting with a shorter variant of its container ID: 1e5535038e28 . 
We can also see the image we used to build it, ubuntu:14.04 , the command it\nis running, its status and an automatically assigned name, insane_babbage . Note: \nDocker automatically names any containers you start, a\nlittle later on we'll see how you can specify your own names. Okay, so we now know it's running. But is it doing what we asked it to do? To see this\nwe're going to look inside the container using the docker logs \ncommand. Let's use the container name Docker assigned. $ sudo docker logs insane_babbage\nhello world\nhello world\nhello world\n. . . The docker logs command looks inside the container and returns its standard\noutput: in this case the output of our command hello world . Awesome! Our daemon is working and we've just created our first\nDockerized application! Now we've established we can create our own containers let's tidy up\nafter ourselves and stop our daemonized container. To do this we use the docker stop command. $ sudo docker stop insane_babbage\ninsane_babbage The docker stop command tells Docker to politely stop the running\ncontainer. If it succeeds it will return the name of the container it\nhas just stopped. Let's check it worked with the docker ps command. $ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Excellent. Our container has been stopped.",
"title": "A Daemonized Hello world"
},
{
"loc": "/userguide/dockerizing#next-steps",
"tags": "",
"text": "Now we've seen how simple it is to get started with Docker let's learn how to\ndo some more advanced tasks. Go to Working With Containers .",
"title": "Next steps"
},
{
"loc": "/userguide/usingdocker/",
"tags": "",
"text": "Working with Containers\nIn the last section of the Docker User Guide\nwe launched our first containers. We launched two containers using the\ndocker run command.\n\nContainers we ran interactively in the foreground.\nOne container we ran daemonized in the background.\n\nIn the process we learned about several Docker commands:\n\ndocker ps - Lists containers.\ndocker logs - Shows us the standard output of a container.\ndocker stop - Stops running containers.\n\n\nTip:\nAnother way to learn about docker commands is our\ninteractive tutorial.\n\nThe docker client is pretty simple. Each action you can take\nwith Docker is a command and each command can take a series of\nflags and arguments.\n# Usage: [sudo] docker [command] [flags] [arguments] ..\n# Example:\n$ sudo docker run -i -t ubuntu /bin/bash\n\nLet's see this in action by using the docker version command to return\nversion information on the currently installed Docker client and daemon.\n$ sudo docker version\n\nThis command will not only provide you the version of Docker client and\ndaemon you are using, but also the version of Go (the programming\nlanguage powering Docker).\nClient version: 0.8.0\nGo version (client): go1.2\n\nGit commit (client): cc3a8c8\nServer version: 0.8.0\n\nGit commit (server): cc3a8c8\nGo version (server): go1.2\n\nLast stable version: 0.8.0\n\nSeeing what the Docker client can do\nWe can see all of the commands available to us with the Docker client by\nrunning the docker binary without any options.\n$ sudo docker\n\nYou will see a list of all currently available commands.\nCommands:\n attach Attach to a running container\n build Build an image from a Dockerfile\n commit Create a new image from a container's changes\n. . .\n\nSeeing Docker command usage\nYou can also zoom in and review the usage for specific Docker commands.\nTry typing Docker followed with a [command] to see the usage for that\ncommand:\n$ sudo docker attach\nHelp output . . 
.\n\nOr you can also pass the --help flag to the docker binary.\n$ sudo docker attach --help\n\nThis will display the help text and all available flags:\nUsage: docker attach [OPTIONS] CONTAINER\n\nAttach to a running container\n\n --no-stdin=false: Do not attach stdin\n --sig-proxy=true: Proxify all received signal to the process (non-TTY mode only)\n\n\nNote: \nYou can see a full list of Docker's commands\nhere.\n\nRunning a Web Application in Docker\nSo now we've learnt a bit more about the docker client let's move onto\nthe important stuff: running more containers. So far none of the\ncontainers we've run did anything particularly useful though. So let's\nbuild on that experience by running an example web application in\nDocker.\nFor our web application we're going to run a Python Flask application.\nLet's start with a docker run command.\n$ sudo docker run -d -P training/webapp python app.py\n\nLet's review what our command did. We've specified two flags: -d and\n-P. We've already seen the -d flag which tells Docker to run the\ncontainer in the background. The -P flag is new and tells Docker to\nmap any required network ports inside our container to our host. This\nlets us view our web application.\nWe've specified an image: training/webapp. This image is a\npre-built image we've created that contains a simple Python Flask web\napplication.\nLastly, we've specified a command for our container to run: python app.py. This launches our web application.\n\nNote: \nYou can see more detail on the docker run command in the command\nreference and the Docker Run\nReference.\n\nViewing our Web Application Container\nNow let's see our running container using the docker ps command.\n$ sudo docker ps -l\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nbc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse\n\nYou can see we've specified a new flag, -l, for the docker ps\ncommand. 
This tells the docker ps command to return the details of the\nlast container started.\n\nNote: \nBy default, the docker ps command only shows information about running\ncontainers. If you want to see stopped containers too use the -a flag.\n\nWe can see the same details we saw when we first Dockerized a\ncontainer with one important addition in the PORTS\ncolumn.\nPORTS\n0.0.0.0:49155->5000/tcp\n\nWhen we passed the -P flag to the docker run command Docker mapped any\nports exposed in our image to our host.\n\nNote: \nWe'll learn more about how to expose ports in Docker images when\nwe learn how to build images.\n\nIn this case Docker has exposed port 5000 (the default Python Flask\nport) on port 49155.\nNetwork port bindings are very configurable in Docker. In our last\nexample the -P flag is a shortcut for -p 5000 that maps port 5000\ninside the container to a high port (from the range 49153 to 65535) on\nthe local Docker host. We can also bind Docker containers to specific\nports using the -p flag, for example:\n$ sudo docker run -d -p 5000:5000 training/webapp python app.py\n\nThis would map port 5000 inside our container to port 5000 on our local\nhost. You might be asking about now: why wouldn't we just want to always\nuse 1:1 port mappings in Docker containers rather than mapping to high\nports? Well 1:1 mappings have the constraint of only being able to map\none of each port on your local host. 
Let's say you want to test two\nPython applications: both bound to port 5000 inside their own containers.\nWithout Docker's port mapping you could only access one at a time on the\nDocker host.\nSo let's now browse to port 49155 in a web browser to\nsee the application.\nOur Python application is live!\n\nNote:\nIf you have used the boot2docker virtual machine on OS X, Windows or Linux,\nyou'll need to get the IP of the virtual host instead of using localhost.\nYou can do this by running the following in\nthe boot2docker shell.\n$ boot2docker ip\nThe VM's Host only interface IP address is: 192.168.59.103\n\nIn this case you'd browse to http://192.168.59.103:49155 for the above example.\n\nA Network Port Shortcut\nUsing the docker ps command to return the mapped port is a bit clumsy so\nDocker has a useful shortcut we can use: docker port. To use docker port we\nspecify the ID or name of our container and then the port for which we need the\ncorresponding public-facing port.\n$ sudo docker port nostalgic_morse 5000\n0.0.0.0:49155\n\nIn this case we've looked up what port is mapped externally to port 5000 inside\nthe container.\nViewing the Web Application's Logs\nLet's also find out a bit more about what's happening with our application and\nuse another of the commands we've learnt, docker logs.\n$ sudo docker logs -f nostalgic_morse\n* Running on http://0.0.0.0:5000/\n10.0.2.2 - - [23/May/2014 20:16:31] \"GET / HTTP/1.1\" 200 -\n10.0.2.2 - - [23/May/2014 20:16:31] \"GET /favicon.ico HTTP/1.1\" 404 -\n\nThis time though we've added a new flag, -f. This causes the docker\nlogs command to act like the tail -f command and watch the\ncontainer's standard out. 
We can see here the logs from Flask showing\nthe application running on port 5000 and the access log entries for it.\nLooking at our Web Application Container's processes\nIn addition to the container's logs we can also examine the processes\nrunning inside it using the docker top command.\n$ sudo docker top nostalgic_morse\nPID USER COMMAND\n854 root python app.py\n\nHere we can see our python app.py command is the only process running inside\nthe container.\nInspecting our Web Application Container\nLastly, we can take a low-level dive into our Docker container using the\ndocker inspect command. It returns a JSON hash of useful configuration\nand status information about Docker containers.\n$ sudo docker inspect nostalgic_morse\n\nLet's see a sample of that JSON output.\n[{\n \"ID\": \"bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20\",\n \"Created\": \"2014-05-26T05:52:40.808952951Z\",\n \"Path\": \"python\",\n \"Args\": [\n \"app.py\"\n ],\n \"Config\": {\n \"Hostname\": \"bc533791f3f5\",\n \"Domainname\": \"\",\n \"User\": \"\",\n. . .\n\nWe can also narrow down the information we want to return by requesting a\nspecific element, for example to return the container's IP address we would:\n$ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse\n172.17.0.5\n\nStopping our Web Application Container\nOkay we've seen our web application working. Now let's stop it using the\ndocker stop command and the name of our container: nostalgic_morse.\n$ sudo docker stop nostalgic_morse\nnostalgic_morse\n\nWe can now use the docker ps command to check if the container has\nbeen stopped.\n$ sudo docker ps -l\n\nRestarting our Web Application Container\nOops! Just after you stopped the container you get a call to say another\ndeveloper needs the container back. From here you have two choices: you\ncan create a new container or restart the old one. 
Let's look at\nstarting our previous container back up.\n$ sudo docker start nostalgic_morse\nnostalgic_morse\n\nNow quickly run docker ps -l again to see the running container is\nback up or browse to the container's URL to see if the application\nresponds.\n\nNote: \nAlso available is the docker restart command that runs a stop and\nthen start on the container.\n\nRemoving our Web Application Container\nYour colleague has let you know that they've now finished with the container\nand won't need it again. So let's remove it using the docker rm command.\n$ sudo docker rm nostalgic_morse\nError: Impossible to remove a running container, please stop it first or use -f\n2014/05/24 08:12:56 Error: failed to remove one or more containers\n\nWhat's happened? We can't actually remove a running container. This protects\nyou from accidentally removing a running container you might need. Let's try\nthis again by stopping the container first.\n$ sudo docker stop nostalgic_morse\nnostalgic_morse\n$ sudo docker rm nostalgic_morse\nnostalgic_morse\n\nAnd now our container is stopped and deleted.\n\nNote:\nAlways remember that deleting a container is final!\n\nNext steps\nUntil now we've only used images that we've downloaded from\nDocker Hub, now let's get introduced to\nbuilding and sharing our own images.\nGo to Working with Docker Images.",
"title": "Working with Containers"
},
{
"loc": "/userguide/usingdocker#working-with-containers",
"tags": "",
"text": "In the last section of the Docker User Guide \nwe launched our first containers. We launched two containers using the docker run command. Containers we ran interactively in the foreground. One container we ran daemonized in the background. In the process we learned about several Docker commands: docker ps - Lists containers. docker logs - Shows us the standard output of a container. docker stop - Stops running containers. Tip: \nAnother way to learn about docker commands is our interactive tutorial . The docker client is pretty simple. Each action you can take\nwith Docker is a command and each command can take a series of\nflags and arguments. # Usage: [sudo] docker [command] [flags] [arguments] ..\n# Example:\n$ sudo docker run -i -t ubuntu /bin/bash Let's see this in action by using the docker version command to return\nversion information on the currently installed Docker client and daemon. $ sudo docker version This command will not only provide you the version of Docker client and\ndaemon you are using, but also the version of Go (the programming\nlanguage powering Docker). Client version: 0.8.0\nGo version (client): go1.2\n\nGit commit (client): cc3a8c8\nServer version: 0.8.0\n\nGit commit (server): cc3a8c8\nGo version (server): go1.2\n\nLast stable version: 0.8.0 Seeing what the Docker client can do We can see all of the commands available to us with the Docker client by\nrunning the docker binary without any options. $ sudo docker You will see a list of all currently available commands. Commands:\n attach Attach to a running container\n build Build an image from a Dockerfile\n commit Create a new image from a container's changes\n. . . Seeing Docker command usage You can also zoom in and review the usage for specific Docker commands. Try typing Docker followed with a [command] to see the usage for that\ncommand: $ sudo docker attach\nHelp output . . . Or you can also pass the --help flag to the docker binary. 
$ sudo docker attach --help This will display the help text and all available flags: Usage: docker attach [OPTIONS] CONTAINER\n\nAttach to a running container\n\n --no-stdin=false: Do not attach stdin\n --sig-proxy=true: Proxify all received signal to the process (non-TTY mode only) Note: \nYou can see a full list of Docker's commands here .",
"title": "Working with Containers"
},
{
"loc": "/userguide/usingdocker#running-a-web-application-in-docker",
"tags": "",
"text": "So now we've learnt a bit more about the docker client let's move onto\nthe important stuff: running more containers. So far none of the\ncontainers we've run did anything particularly useful though. So let's\nbuild on that experience by running an example web application in\nDocker. For our web application we're going to run a Python Flask application.\nLet's start with a docker run command. $ sudo docker run -d -P training/webapp python app.py Let's review what our command did. We've specified two flags: -d and -P . We've already seen the -d flag which tells Docker to run the\ncontainer in the background. The -P flag is new and tells Docker to\nmap any required network ports inside our container to our host. This\nlets us view our web application. We've specified an image: training/webapp . This image is a\npre-built image we've created that contains a simple Python Flask web\napplication. Lastly, we've specified a command for our container to run: python app.py . This launches our web application. Note: \nYou can see more detail on the docker run command in the command\nreference and the Docker Run\nReference .",
"title": "Running a Web Application in Docker"
},
{
"loc": "/userguide/usingdocker#viewing-our-web-application-container",
"tags": "",
"text": "Now let's see our running container using the docker ps command. $ sudo docker ps -l\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nbc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse You can see we've specified a new flag, -l , for the docker ps \ncommand. This tells the docker ps command to return the details of the last container started. Note: \nBy default, the docker ps command only shows information about running\ncontainers. If you want to see stopped containers too use the -a flag. We can see the same details we saw when we first Dockerized a\ncontainer with one important addition in the PORTS \ncolumn. PORTS\n0.0.0.0:49155->5000/tcp When we passed the -P flag to the docker run command Docker mapped any\nports exposed in our image to our host. Note: \nWe'll learn more about how to expose ports in Docker images when we learn how to build images . In this case Docker has exposed port 5000 (the default Python Flask\nport) on port 49155. Network port bindings are very configurable in Docker. In our last\nexample the -P flag is a shortcut for -p 5000 that maps port 5000\ninside the container to a high port (from the range 49153 to 65535) on\nthe local Docker host. We can also bind Docker containers to specific\nports using the -p flag, for example: $ sudo docker run -d -p 5000:5000 training/webapp python app.py This would map port 5000 inside our container to port 5000 on our local\nhost. You might be asking about now: why wouldn't we just want to always\nuse 1:1 port mappings in Docker containers rather than mapping to high\nports? Well 1:1 mappings have the constraint of only being able to map\none of each port on your local host. 
So let's now browse to port 49155 in a web browser to\nsee the application. Our Python application is live! Note: \nIf you have used the boot2docker virtual machine on OS X, Windows or Linux,\nyou'll need to get the IP of the virtual host instead of using localhost.\nYou can do this by running the following in\nthe boot2docker shell. $ boot2docker ip\nThe VM's Host only interface IP address is: 192.168.59.103 In this case you'd browse to http://192.168.59.103:49155 for the above example.",
"title": "Viewing our Web Application Container"
},
{
"loc": "/userguide/usingdocker#a-network-port-shortcut",
"tags": "",
"text": "Using the docker ps command to return the mapped port is a bit clumsy so\nDocker has a useful shortcut we can use: docker port . To use docker port we\nspecify the ID or name of our container and then the port for which we need the\ncorresponding public-facing port. $ sudo docker port nostalgic_morse 5000\n0.0.0.0:49155 In this case we've looked up what port is mapped externally to port 5000 inside\nthe container.",
"title": "A Network Port Shortcut"
},
{
"loc": "/userguide/usingdocker#viewing-the-web-applications-logs",
"tags": "",
"text": "Let's also find out a bit more about what's happening with our application and\nuse another of the commands we've learnt, docker logs . $ sudo docker logs -f nostalgic_morse\n* Running on http://0.0.0.0:5000/\n10.0.2.2 - - [23/May/2014 20:16:31] \"GET / HTTP/1.1\" 200 -\n10.0.2.2 - - [23/May/2014 20:16:31] \"GET /favicon.ico HTTP/1.1\" 404 - This time though we've added a new flag, -f . This causes the docker\nlogs command to act like the tail -f command and watch the\ncontainer's standard out. We can see here the logs from Flask showing\nthe application running on port 5000 and the access log entries for it.",
"title": "Viewing the Web Application's Logs"
},
{
"loc": "/userguide/usingdocker#looking-at-our-web-application-containers-processes",
"tags": "",
"text": "In addition to the container's logs we can also examine the processes\nrunning inside it using the docker top command. $ sudo docker top nostalgic_morse\nPID USER COMMAND\n854 root python app.py Here we can see our python app.py command is the only process running inside\nthe container.",
"title": "Looking at our Web Application Container's processes"
},
{
"loc": "/userguide/usingdocker#inspecting-our-web-application-container",
"tags": "",
"text": "Lastly, we can take a low-level dive into our Docker container using the docker inspect command. It returns a JSON hash of useful configuration\nand status information about Docker containers. $ sudo docker inspect nostalgic_morse Let's see a sample of that JSON output. [{\n \"ID\": \"bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20\",\n \"Created\": \"2014-05-26T05:52:40.808952951Z\",\n \"Path\": \"python\",\n \"Args\": [\n \"app.py\"\n ],\n \"Config\": {\n \"Hostname\": \"bc533791f3f5\",\n \"Domainname\": \"\",\n \"User\": \"\",\n. . . We can also narrow down the information we want to return by requesting a\nspecific element, for example to return the container's IP address we would: $ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse\n172.17.0.5",
"title": "Inspecting our Web Application Container"
},
{
"loc": "/userguide/usingdocker#stopping-our-web-application-container",
"tags": "",
"text": "Okay we've seen web application working. Now let's stop it using the docker stop command and the name of our container: nostalgic_morse . $ sudo docker stop nostalgic_morse\nnostalgic_morse We can now use the docker ps command to check if the container has\nbeen stopped. $ sudo docker ps -l",
"title": "Stopping our Web Application Container"
},
{
"loc": "/userguide/usingdocker#restarting-our-web-application-container",
"tags": "",
"text": "Oops! Just after you stopped the container you get a call to say another\ndeveloper needs the container back. From here you have two choices: you\ncan create a new container or restart the old one. Let's look at\nstarting our previous container back up. $ sudo docker start nostalgic_morse\nnostalgic_morse Now quickly run docker ps -l again to see the running container is\nback up or browse to the container's URL to see if the application\nresponds. Note: \nAlso available is the docker restart command that runs a stop and\nthen start on the container.",
"title": "Restarting our Web Application Container"
},
{
"loc": "/userguide/usingdocker#removing-our-web-application-container",
"tags": "",
"text": "Your colleague has let you know that they've now finished with the container\nand won't need it again. So let's remove it using the docker rm command. $ sudo docker rm nostalgic_morse\nError: Impossible to remove a running container, please stop it first or use -f\n2014/05/24 08:12:56 Error: failed to remove one or more containers What's happened? We can't actually remove a running container. This protects\nyou from accidentally removing a running container you might need. Let's try\nthis again by stopping the container first. $ sudo docker stop nostalgic_morse\nnostalgic_morse\n$ sudo docker rm nostalgic_morse\nnostalgic_morse And now our container is stopped and deleted. Note: \nAlways remember that deleting a container is final!",
"title": "Removing our Web Application Container"
},
{
"loc": "/userguide/usingdocker#next-steps",
"tags": "",
"text": "Until now we've only used images that we've downloaded from Docker Hub now let's get introduced to\nbuilding and sharing our own images. Go to Working with Docker Images .",
"title": "Next steps"
},
{
"loc": "/userguide/dockerimages/",
"tags": "",
"text": "Working with Docker Images\nIn the introduction we've discovered that Docker\nimages are the basis of containers. In the\nprevious sections\nwe've used Docker images that already exist, for example the ubuntu\nimage and the training/webapp image.\nWe've also discovered that Docker stores downloaded images on the Docker\nhost. If an image isn't already present on the host then it'll be\ndownloaded from a registry: by default the\nDocker Hub Registry.\nIn this section we're going to explore Docker images a bit more\nincluding:\n\nManaging and working with images locally on your Docker host;\nCreating basic images;\nUploading images to Docker Hub Registry.\n\nListing images on the host\nLet's start with listing the images we have locally on our host. You can\ndo this using the docker images command like so:\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ntraining/webapp latest fc77f57ad303 3 weeks ago 280.5 MB\nubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB\nubuntu saucy 5e019ab7bf6d 4 weeks ago 180 MB\nubuntu 12.04 74fe38d11401 4 weeks ago 209.6 MB\nubuntu precise 74fe38d11401 4 weeks ago 209.6 MB\nubuntu 12.10 a7cf8ae4e998 4 weeks ago 171.3 MB\nubuntu quantal a7cf8ae4e998 4 weeks ago 171.3 MB\nubuntu 14.04 99ec81b80c55 4 weeks ago 266 MB\nubuntu latest 99ec81b80c55 4 weeks ago 266 MB\nubuntu trusty 99ec81b80c55 4 weeks ago 266 MB\nubuntu 13.04 316b678ddf48 4 weeks ago 169.4 MB\nubuntu raring 316b678ddf48 4 weeks ago 169.4 MB\nubuntu 10.04 3db9c44f4520 4 weeks ago 183 MB\nubuntu lucid 3db9c44f4520 4 weeks ago 183 MB\n\nWe can see the images we've previously used in our user guide.\nEach has been downloaded from Docker Hub when we\nlaunched a container using that image.\nWe can see three crucial pieces of information about our images in the listing.\n\nWhat repository they came from, for example ubuntu.\nThe tags for each image, for example 14.04.\nThe image ID of each image.\n\nA repository potentially holds multiple variants of an image. 
In the case of\nour ubuntu image we can see multiple variants covering Ubuntu 10.04, 12.04,\n12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can\nrefer to a tagged image like so:\nubuntu:14.04\n\nSo when we run a container we refer to a tagged image like so:\n$ sudo docker run -t -i ubuntu:14.04 /bin/bash\n\nIf instead we wanted to run an Ubuntu 12.04 image we'd use:\n$ sudo docker run -t -i ubuntu:12.04 /bin/bash\n\nIf you don't specify a variant, for example you just use ubuntu, then Docker\nwill default to using the ubuntu:latest image.\n\nTip: \nWe recommend you always use a specific tagged image, for example\nubuntu:12.04. That way you always know exactly what variant of an image is\nbeing used.\n\nGetting a new image\nSo how do we get new images? Well Docker will automatically download any image\nwe use that isn't already present on the Docker host. But this can potentially\nadd some time to the launch of a container. If we want to pre-load an image we\ncan download it using the docker pull command. Let's say we'd like to\ndownload the centos image.\n$ sudo docker pull centos\nPulling repository centos\nb7de3133ff98: Pulling dependent layers\n5cc9e91966f7: Pulling fs layer\n511136ea3c5a: Download complete\nef52fb1fe610: Download complete\n. . .\n\nStatus: Downloaded newer image for centos\n\nWe can see that each layer of the image has been pulled down and now we\ncan run a container from this image and we won't have to wait to\ndownload the image.\n$ sudo docker run -t -i centos /bin/bash\nbash-4.1#\n\nFinding images\nOne of the features of Docker is that a lot of people have created Docker\nimages for a variety of purposes. Many of these have been uploaded to\nDocker Hub. We can search these images on the\nDocker Hub website.\n\nWe can also search for images on the command line using the docker search\ncommand. Let's say our team wants an image with Ruby and Sinatra installed on\nwhich to do our web application development. 
We can search for a suitable image\nby using the docker search command to find all the images that contain the\nterm sinatra.\n$ sudo docker search sinatra\nNAME DESCRIPTION STARS OFFICIAL AUTOMATED\ntraining/sinatra Sinatra training image 0 [OK]\nmarceldegraaf/sinatra Sinatra test app 0\nmattwarren/docker-sinatra-demo 0 [OK]\nluisbebop/docker-sinatra-hello-world 0 [OK]\nbmorearty/handson-sinatra handson-ruby + Sinatra for Hands on with D... 0\nsubwiz/sinatra 0\nbmorearty/sinatra 0\n. . .\n\nWe can see we've returned a lot of images that use the term sinatra. We've\nreturned a list of image names, descriptions, Stars (which measure the social\npopularity of images - if a user likes an image then they can \"star\" it), and\nthe Official and Automated build statuses. Official repositories are built and\nmaintained by the Stackbrew project,\nand Automated repositories are Automated Builds that allow you to validate the source\nand content of an image.\nWe've reviewed the images available to use and we decided to use the\ntraining/sinatra image. So far we've seen two types of images repositories,\nimages like ubuntu, which are called base or root images. These base images\nare provided by Docker Inc and are built, validated and supported. These can be\nidentified by their single word names.\nWe've also seen user images, for example the training/sinatra image we've\nchosen. A user image belongs to a member of the Docker community and is built\nand maintained by them. 
You can identify user images as they are always\nprefixed with the user name, here training, of the user that created them.\nPulling our image\nWe've identified a suitable image, training/sinatra, and now we can download it using the docker pull command.\n$ sudo docker pull training/sinatra\n\nThe team can now use this image by running their own containers.\n$ sudo docker run -t -i training/sinatra /bin/bash\nroot@a8cb6ce02d85:/#\n\nCreating our own images\nThe team has found the training/sinatra image pretty useful but it's not quite what\nthey need and we need to make some changes to it. There are two ways we can\nupdate and create images.\n\nWe can update a container created from an image and commit the results to an image.\nWe can use a Dockerfile to specify instructions to create an image.\n\nUpdating and committing an image\nTo update an image we first need to create a container from the image\nwe'd like to update.\n$ sudo docker run -t -i training/sinatra /bin/bash\nroot@0b2616b0e5a8:/#\n\n\nNote: \nTake note of the container ID that has been created, 0b2616b0e5a8, as we'll\nneed it in a moment.\n\nInside our running container let's add the json gem.\nroot@0b2616b0e5a8:/# gem install json\n\nOnce this has completed let's exit our container using the exit\ncommand.\nNow we have a container with the change we want to make. We can then\ncommit a copy of this container to an image using the docker commit\ncommand.\n$ sudo docker commit -m \"Added json gem\" -a \"Kate Smith\" \\\n0b2616b0e5a8 ouruser/sinatra:v2\n4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c\n\nHere we've used the docker commit command. We've specified two flags: -m\nand -a. The -m flag allows us to specify a commit message, much like you\nwould with a commit on a version control system. 
The -a flag allows us to\nspecify an author for our update.\nWe've also specified the container we want to create this new image from,\n0b2616b0e5a8 (the ID we recorded earlier) and we've specified a target for\nthe image:\nouruser/sinatra:v2\n\nLet's break this target down. It consists of a new user, ouruser, that we're\nwriting this image to. We've also specified the name of the image, here we're\nkeeping the original image name sinatra. Finally we're specifying a tag for\nthe image: v2.\nWe can then look at our new ouruser/sinatra image using the docker images\ncommand.\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ntraining/sinatra latest 5bc342fa0b91 10 hours ago 446.7 MB\nouruser/sinatra v2 3c59e02ddd1a 10 hours ago 446.7 MB\nouruser/sinatra latest 5db5f8471261 10 hours ago 446.7 MB\n\nTo use our new image to create a container we can then:\n$ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash\nroot@78e82f680994:/#\n\nBuilding an image from a Dockerfile\nUsing the docker commit command is a pretty simple way of extending an image\nbut it's a bit cumbersome and it's not easy to share a development process for\nimages amongst a team. Instead we can use a new command, docker build, to\nbuild new images from scratch.\nTo do this we create a Dockerfile that contains a set of instructions that\ntell Docker how to build our image.\nLet's create a directory and a Dockerfile first.\n$ mkdir sinatra\n$ cd sinatra\n$ touch Dockerfile\n\nEach instruction creates a new layer of the image. Let's look at a simple\nexample now for building our own Sinatra image for our development team.\n# This is a comment\nFROM ubuntu:14.04\nMAINTAINER Kate Smith ksmith@example.com\nRUN apt-get update && apt-get install -y ruby ruby-dev\nRUN gem install sinatra\n\nLet's look at what our Dockerfile does. 
Each instruction prefixes a statement and is capitalized.\nINSTRUCTION statement\n\n\nNote:\nWe use # to indicate a comment\n\nThe first instruction FROM tells Docker what the source of our image is, in\nthis case we're basing our new image on an Ubuntu 14.04 image.\nNext we use the MAINTAINER instruction to specify who maintains our new image.\nLastly, we've specified two RUN instructions. A RUN instruction executes\na command inside the image, for example installing a package. Here we're\nupdating our APT cache, installing Ruby and RubyGems and then installing the\nSinatra gem.\n\nNote: \nThere are a lot more instructions available to us in a Dockerfile.\n\nNow let's take our Dockerfile and use the docker build command to build an image.\n$ sudo docker build -t ouruser/sinatra:v2 .\nSending build context to Docker daemon 2.048 kB\nSending build context to Docker daemon \nStep 0 : FROM ubuntu:14.04\n ---> e54ca5efa2e9\nStep 1 : MAINTAINER Kate Smith ksmith@example.com\n ---> Using cache\n ---> 851baf55332b\nStep 2 : RUN apt-get update && apt-get install -y ruby ruby-dev\n ---> Running in 3a2558904e9b\nSelecting previously unselected package libasan0:amd64.\n(Reading database ... 
11518 files and directories currently installed.)\nPreparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libasan0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libatomic1:amd64.\nPreparing to unpack .../libatomic1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libatomic1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libgmp10:amd64.\nPreparing to unpack .../libgmp10_2%3a5.1.3+dfsg-1ubuntu1_amd64.deb ...\nUnpacking libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...\nSelecting previously unselected package libisl10:amd64.\nPreparing to unpack .../libisl10_0.12.2-1_amd64.deb ...\nUnpacking libisl10:amd64 (0.12.2-1) ...\nSelecting previously unselected package libcloog-isl4:amd64.\nPreparing to unpack .../libcloog-isl4_0.18.2-1_amd64.deb ...\nUnpacking libcloog-isl4:amd64 (0.18.2-1) ...\nSelecting previously unselected package libgomp1:amd64.\nPreparing to unpack .../libgomp1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libgomp1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libitm1:amd64.\nPreparing to unpack .../libitm1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libitm1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libmpfr4:amd64.\nPreparing to unpack .../libmpfr4_3.1.2-1_amd64.deb ...\nUnpacking libmpfr4:amd64 (3.1.2-1) ...\nSelecting previously unselected package libquadmath0:amd64.\nPreparing to unpack .../libquadmath0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libquadmath0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libtsan0:amd64.\nPreparing to unpack .../libtsan0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libtsan0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libyaml-0-2:amd64.\nPreparing to unpack .../libyaml-0-2_0.1.4-3ubuntu3_amd64.deb ...\nUnpacking libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...\nSelecting previously unselected package libmpc3:amd64.\nPreparing to unpack .../libmpc3_1.0.1-1ubuntu1_amd64.deb ...\nUnpacking 
libmpc3:amd64 (1.0.1-1ubuntu1) ...\nSelecting previously unselected package openssl.\nPreparing to unpack .../openssl_1.0.1f-1ubuntu2.4_amd64.deb ...\nUnpacking openssl (1.0.1f-1ubuntu2.4) ...\nSelecting previously unselected package ca-certificates.\nPreparing to unpack .../ca-certificates_20130906ubuntu2_all.deb ...\nUnpacking ca-certificates (20130906ubuntu2) ...\nSelecting previously unselected package manpages.\nPreparing to unpack .../manpages_3.54-1ubuntu1_all.deb ...\nUnpacking manpages (3.54-1ubuntu1) ...\nSelecting previously unselected package binutils.\nPreparing to unpack .../binutils_2.24-5ubuntu3_amd64.deb ...\nUnpacking binutils (2.24-5ubuntu3) ...\nSelecting previously unselected package cpp-4.8.\nPreparing to unpack .../cpp-4.8_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking cpp-4.8 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package cpp.\nPreparing to unpack .../cpp_4%3a4.8.2-1ubuntu6_amd64.deb ...\nUnpacking cpp (4:4.8.2-1ubuntu6) ...\nSelecting previously unselected package libgcc-4.8-dev:amd64.\nPreparing to unpack .../libgcc-4.8-dev_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package gcc-4.8.\nPreparing to unpack .../gcc-4.8_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking gcc-4.8 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package gcc.\nPreparing to unpack .../gcc_4%3a4.8.2-1ubuntu6_amd64.deb ...\nUnpacking gcc (4:4.8.2-1ubuntu6) ...\nSelecting previously unselected package libc-dev-bin.\nPreparing to unpack .../libc-dev-bin_2.19-0ubuntu6_amd64.deb ...\nUnpacking libc-dev-bin (2.19-0ubuntu6) ...\nSelecting previously unselected package linux-libc-dev:amd64.\nPreparing to unpack .../linux-libc-dev_3.13.0-30.55_amd64.deb ...\nUnpacking linux-libc-dev:amd64 (3.13.0-30.55) ...\nSelecting previously unselected package libc6-dev:amd64.\nPreparing to unpack .../libc6-dev_2.19-0ubuntu6_amd64.deb ...\nUnpacking libc6-dev:amd64 (2.19-0ubuntu6) ...\nSelecting previously 
unselected package ruby.\nPreparing to unpack .../ruby_1%3a1.9.3.4_all.deb ...\nUnpacking ruby (1:1.9.3.4) ...\nSelecting previously unselected package ruby1.9.1.\nPreparing to unpack .../ruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking ruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package libruby1.9.1.\nPreparing to unpack .../libruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking libruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package manpages-dev.\nPreparing to unpack .../manpages-dev_3.54-1ubuntu1_all.deb ...\nUnpacking manpages-dev (3.54-1ubuntu1) ...\nSelecting previously unselected package ruby1.9.1-dev.\nPreparing to unpack .../ruby1.9.1-dev_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package ruby-dev.\nPreparing to unpack .../ruby-dev_1%3a1.9.3.4_all.deb ...\nUnpacking ruby-dev (1:1.9.3.4) ...\nSetting up libasan0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libatomic1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...\nSetting up libisl10:amd64 (0.12.2-1) ...\nSetting up libcloog-isl4:amd64 (0.18.2-1) ...\nSetting up libgomp1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libitm1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libmpfr4:amd64 (3.1.2-1) ...\nSetting up libquadmath0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libtsan0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...\nSetting up libmpc3:amd64 (1.0.1-1ubuntu1) ...\nSetting up openssl (1.0.1f-1ubuntu2.4) ...\nSetting up ca-certificates (20130906ubuntu2) ...\ndebconf: unable to initialize frontend: Dialog\ndebconf: (TERM is not set, so the dialog frontend is not usable.)\ndebconf: falling back to frontend: Readline\ndebconf: unable to initialize frontend: Readline\ndebconf: (This frontend requires a controlling tty.)\ndebconf: falling back to frontend: Teletype\nSetting up manpages (3.54-1ubuntu1) ...\nSetting up binutils 
(2.24-5ubuntu3) ...\nSetting up cpp-4.8 (4.8.2-19ubuntu1) ...\nSetting up cpp (4:4.8.2-1ubuntu6) ...\nSetting up libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...\nSetting up gcc-4.8 (4.8.2-19ubuntu1) ...\nSetting up gcc (4:4.8.2-1ubuntu6) ...\nSetting up libc-dev-bin (2.19-0ubuntu6) ...\nSetting up linux-libc-dev:amd64 (3.13.0-30.55) ...\nSetting up libc6-dev:amd64 (2.19-0ubuntu6) ...\nSetting up manpages-dev (3.54-1ubuntu1) ...\nSetting up libruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSetting up ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...\nSetting up ruby-dev (1:1.9.3.4) ...\nSetting up ruby (1:1.9.3.4) ...\nSetting up ruby1.9.1 (1.9.3.484-2ubuntu1) ...\nProcessing triggers for libc-bin (2.19-0ubuntu6) ...\nProcessing triggers for ca-certificates (20130906ubuntu2) ...\nUpdating certificates in /etc/ssl/certs... 164 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d....done.\n ---> c55c31703134\nRemoving intermediate container 3a2558904e9b\nStep 3 : RUN gem install sinatra\n ---> Running in 6b81cb6313e5\nunable to convert \"\\xC3\" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping\nunable to convert \"\\xC3\" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping\nSuccessfully installed rack-1.5.2\nSuccessfully installed tilt-1.4.1\nSuccessfully installed rack-protection-1.5.3\nSuccessfully installed sinatra-1.4.5\n4 gems installed\nInstalling ri documentation for rack-1.5.2...\nInstalling ri documentation for tilt-1.4.1...\nInstalling ri documentation for rack-protection-1.5.3...\nInstalling ri documentation for sinatra-1.4.5...\nInstalling RDoc documentation for rack-1.5.2...\nInstalling RDoc documentation for tilt-1.4.1...\nInstalling RDoc documentation for rack-protection-1.5.3...\nInstalling RDoc documentation for sinatra-1.4.5...\n ---> 97feabe5d2ed\nRemoving intermediate container 6b81cb6313e5\nSuccessfully built 97feabe5d2ed\n\nWe've specified our docker build command and used the -t flag to 
identify\nour new image as belonging to the user ouruser, the repository name sinatra\nand given it the tag v2.\nWe've also specified the location of our Dockerfile using the . to\nindicate a Dockerfile in the current directory.\n\nNote:\nYou can also specify a path to a Dockerfile.\n\nNow we can see the build process at work. The first thing Docker does is\nupload the build context: basically the contents of the directory you're\nbuilding in. This is done because the Docker daemon does the actual\nbuild of the image and it needs the local context to do it.\nNext we can see each instruction in the Dockerfile being executed\nstep-by-step. We can see that each step creates a new container, runs\nthe instruction inside that container and then commits that change -\njust like the docker commit work flow we saw earlier. When all the\ninstructions have executed we're left with the 97feabe5d2ed image\n(also helpfully tagged as ouruser/sinatra:v2) and all intermediate\ncontainers will get removed to clean things up.\n\nNote: \nAn image can't have more than 127 layers regardless of the storage driver.\nThis limitation is set globally to encourage optimization of the overall \nsize of images.\n\nWe can then create a container from our new image.\n$ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash\nroot@8196968dac35:/#\n\n\nNote: \nThis is just a brief introduction to creating images. We've\nskipped a whole bunch of other instructions that you can use. We'll see more of\nthose instructions in later sections of the Guide or you can refer to the\nDockerfile reference for a\ndetailed description and examples of every instruction.\nTo help you write a clear, readable, maintainable Dockerfile, we've also\nwritten a Dockerfile Best Practices guide.\n\nMore\nTo learn more, check out the Dockerfile tutorial.\nSetting tags on an image\nYou can also add a tag to an existing image after you commit or build it. We\ncan do this using the docker tag command. 
Let's add a new tag to our\nouruser/sinatra image.\n$ sudo docker tag 5db5f8471261 ouruser/sinatra:devel\n\nThe docker tag command takes the ID of the image, here 5db5f8471261, and our\nuser name, the repository name and the new tag.\nLet's see our new tag using the docker images command.\n$ sudo docker images ouruser/sinatra\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nouruser/sinatra latest 5db5f8471261 11 hours ago 446.7 MB\nouruser/sinatra devel 5db5f8471261 11 hours ago 446.7 MB\nouruser/sinatra v2 5db5f8471261 11 hours ago 446.7 MB\n\nPush an image to Docker Hub\nOnce you've built or created a new image you can push it to Docker\nHub using the docker push command. This\nallows you to share it with others, either publicly, or push it into a\nprivate repository.\n$ sudo docker push ouruser/sinatra\nThe push refers to a repository [ouruser/sinatra] (len: 1)\nSending image list\nPushing repository ouruser/sinatra (3 tags)\n. . .\n\nRemove an image from the host\nYou can also remove images on your Docker host in a way similar to\ncontainers using the docker rmi command.\nLet's delete the training/sinatra image as we don't need it anymore.\n$ sudo docker rmi training/sinatra\nUntagged: training/sinatra:latest\nDeleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d\nDeleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f\nDeleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0\n\n\nNote: In order to remove an image from the host, please make sure\nthat there are no containers actively based on it.\n\nNext steps\nUntil now we've seen how to build individual applications inside Docker\ncontainers. Now learn how to build whole application stacks with Docker\nby linking together multiple Docker containers.\nTest your Dockerfile knowledge with the\nDockerfile tutorial.\nGo to Linking Containers Together.",
"title": "Working with Docker Images"
},
{
"loc": "/userguide/dockerimages#working-with-docker-images",
"tags": "",
"text": "In the introduction we've discovered that Docker\nimages are the basis of containers. In the previous sections \nwe've used Docker images that already exist, for example the ubuntu \nimage and the training/webapp image. We've also discovered that Docker stores downloaded images on the Docker\nhost. If an image isn't already present on the host then it'll be\ndownloaded from a registry: by default the Docker Hub Registry . In this section we're going to explore Docker images a bit more\nincluding: Managing and working with images locally on your Docker host; Creating basic images; Uploading images to Docker Hub Registry .",
"title": "Working with Docker Images"
},
{
"loc": "/userguide/dockerimages#listing-images-on-the-host",
"tags": "",
"text": "Let's start with listing the images we have locally on our host. You can\ndo this using the docker images command like so: $ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ntraining/webapp latest fc77f57ad303 3 weeks ago 280.5 MB\nubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB\nubuntu saucy 5e019ab7bf6d 4 weeks ago 180 MB\nubuntu 12.04 74fe38d11401 4 weeks ago 209.6 MB\nubuntu precise 74fe38d11401 4 weeks ago 209.6 MB\nubuntu 12.10 a7cf8ae4e998 4 weeks ago 171.3 MB\nubuntu quantal a7cf8ae4e998 4 weeks ago 171.3 MB\nubuntu 14.04 99ec81b80c55 4 weeks ago 266 MB\nubuntu latest 99ec81b80c55 4 weeks ago 266 MB\nubuntu trusty 99ec81b80c55 4 weeks ago 266 MB\nubuntu 13.04 316b678ddf48 4 weeks ago 169.4 MB\nubuntu raring 316b678ddf48 4 weeks ago 169.4 MB\nubuntu 10.04 3db9c44f4520 4 weeks ago 183 MB\nubuntu lucid 3db9c44f4520 4 weeks ago 183 MB We can see the images we've previously used in our user guide .\nEach has been downloaded from Docker Hub when we\nlaunched a container using that image. We can see three crucial pieces of information about our images in the listing. What repository they came from, for example ubuntu . The tags for each image, for example 14.04 . The image ID of each image. A repository potentially holds multiple variants of an image. In the case of\nour ubuntu image we can see multiple variants covering Ubuntu 10.04, 12.04,\n12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can\nrefer to a tagged image like so: ubuntu:14.04 So when we run a container we refer to a tagged image like so: $ sudo docker run -t -i ubuntu:14.04 /bin/bash If instead we wanted to run an Ubuntu 12.04 image we'd use: $ sudo docker run -t -i ubuntu:12.04 /bin/bash If you don't specify a variant, for example you just use ubuntu , then Docker\nwill default to using the ubuntu:latest image. Tip: \nWe recommend you always use a specific tagged image, for example ubuntu:12.04 . 
That way you always know exactly what variant of an image is\nbeing used.",
"title": "Listing images on the host"
},
{
"loc": "/userguide/dockerimages#getting-a-new-image",
"tags": "",
"text": "So how do we get new images? Well Docker will automatically download any image\nwe use that isn't already present on the Docker host. But this can potentially\nadd some time to the launch of a container. If we want to pre-load an image we\ncan download it using the docker pull command. Let's say we'd like to\ndownload the centos image. $ sudo docker pull centos\nPulling repository centos\nb7de3133ff98: Pulling dependent layers\n5cc9e91966f7: Pulling fs layer\n511136ea3c5a: Download complete\nef52fb1fe610: Download complete\n. . .\n\nStatus: Downloaded newer image for centos We can see that each layer of the image has been pulled down and now we\ncan run a container from this image and we won't have to wait to\ndownload the image. $ sudo docker run -t -i centos /bin/bash\nbash-4.1#",
"title": "Getting a new image"
},
{
"loc": "/userguide/dockerimages#finding-images",
"tags": "",
"text": "One of the features of Docker is that a lot of people have created Docker\nimages for a variety of purposes. Many of these have been uploaded to Docker Hub . We can search these images on the Docker Hub website. We can also search for images on the command line using the docker search \ncommand. Let's say our team wants an image with Ruby and Sinatra installed on\nwhich to do our web application development. We can search for a suitable image\nby using the docker search command to find all the images that contain the\nterm sinatra . $ sudo docker search sinatra\nNAME DESCRIPTION STARS OFFICIAL AUTOMATED\ntraining/sinatra Sinatra training image 0 [OK]\nmarceldegraaf/sinatra Sinatra test app 0\nmattwarren/docker-sinatra-demo 0 [OK]\nluisbebop/docker-sinatra-hello-world 0 [OK]\nbmorearty/handson-sinatra handson-ruby + Sinatra for Hands on with D... 0\nsubwiz/sinatra 0\nbmorearty/sinatra 0\n. . . We can see we've returned a lot of images that use the term sinatra . We've\nreturned a list of image names, descriptions, Stars (which measure the social\npopularity of images - if a user likes an image then they can \"star\" it), and\nthe Official and Automated build statuses. Official repositories are built and\nmaintained by the Stackbrew project,\nand Automated repositories are Automated Builds that allow you to validate the source\nand content of an image. We've reviewed the images available to use and we decided to use the training/sinatra image. So far we've seen two types of images repositories,\nimages like ubuntu , which are called base or root images. These base images\nare provided by Docker Inc and are built, validated and supported. These can be\nidentified by their single word names. We've also seen user images, for example the training/sinatra image we've\nchosen. A user image belongs to a member of the Docker community and is built\nand maintained by them. 
You can identify user images as they are always\nprefixed with the user name, here training , of the user that created them.",
"title": "Finding images"
},
{
"loc": "/userguide/dockerimages#pulling-our-image",
"tags": "",
"text": "We've identified a suitable image, training/sinatra , and now we can download it using the docker pull command. $ sudo docker pull training/sinatra The team can now use this image by running their own containers. $ sudo docker run -t -i training/sinatra /bin/bash\nroot@a8cb6ce02d85:/#",
"title": "Pulling our image"
},
{
"loc": "/userguide/dockerimages#creating-our-own-images",
"tags": "",
"text": "The team has found the training/sinatra image pretty useful but it's not quite what\nthey need and we need to make some changes to it. There are two ways we can\nupdate and create images. We can update a container created from an image and commit the results to an image. We can use a Dockerfile to specify instructions to create an image. Updating and committing an image To update an image we first need to create a container from the image\nwe'd like to update. $ sudo docker run -t -i training/sinatra /bin/bash\nroot@0b2616b0e5a8:/# Note: \nTake note of the container ID that has been created, 0b2616b0e5a8 , as we'll\nneed it in a moment. Inside our running container let's add the json gem. root@0b2616b0e5a8:/# gem install json Once this has completed let's exit our container using the exit \ncommand. Now we have a container with the change we want to make. We can then\ncommit a copy of this container to an image using the docker commit \ncommand. $ sudo docker commit -m \"Added json gem\" -a \"Kate Smith\" \\\n0b2616b0e5a8 ouruser/sinatra:v2\n4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c Here we've used the docker commit command. We've specified two flags: -m \nand -a . The -m flag allows us to specify a commit message, much like you\nwould with a commit on a version control system. The -a flag allows us to\nspecify an author for our update. We've also specified the container we want to create this new image from, 0b2616b0e5a8 (the ID we recorded earlier) and we've specified a target for\nthe image: ouruser/sinatra:v2 Let's break this target down. It consists of a new user, ouruser , that we're\nwriting this image to. We've also specified the name of the image, here we're\nkeeping the original image name sinatra . Finally we're specifying a tag for\nthe image: v2 . We can then look at our new ouruser/sinatra image using the docker images \ncommand. 
$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\ntraining/sinatra latest 5bc342fa0b91 10 hours ago 446.7 MB\nouruser/sinatra v2 3c59e02ddd1a 10 hours ago 446.7 MB\nouruser/sinatra latest 5db5f8471261 10 hours ago 446.7 MB To use our new image to create a container we can then: $ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash\nroot@78e82f680994:/# Building an image from a Dockerfile Using the docker commit command is a pretty simple way of extending an image\nbut it's a bit cumbersome and it's not easy to share a development process for\nimages amongst a team. Instead we can use a new command, docker build , to\nbuild new images from scratch. To do this we create a Dockerfile that contains a set of instructions that\ntell Docker how to build our image. Let's create a directory and a Dockerfile first. $ mkdir sinatra\n$ cd sinatra\n$ touch Dockerfile Each instruction creates a new layer of the image. Let's look at a simple\nexample now for building our own Sinatra image for our development team. # This is a comment\nFROM ubuntu:14.04\nMAINTAINER Kate Smith <ksmith@example.com>\nRUN apt-get update && apt-get install -y ruby ruby-dev\nRUN gem install sinatra Let's look at what our Dockerfile does. Each instruction prefixes a statement and is capitalized. INSTRUCTION statement Note: \nWe use # to indicate a comment The first instruction FROM tells Docker what the source of our image is, in\nthis case we're basing our new image on an Ubuntu 14.04 image. Next we use the MAINTAINER instruction to specify who maintains our new image. Lastly, we've specified two RUN instructions. A RUN instruction executes\na command inside the image, for example installing a package. Here we're\nupdating our APT cache, installing Ruby and RubyGems and then installing the\nSinatra gem. Note: \nThere are a lot more instructions available to us in a Dockerfile . Now let's take our Dockerfile and use the docker build command to build an image. 
$ sudo docker build -t ouruser/sinatra:v2 .\nSending build context to Docker daemon 2.048 kB\nSending build context to Docker daemon \nStep 0 : FROM ubuntu:14.04\n ---> e54ca5efa2e9\nStep 1 : MAINTAINER Kate Smith <ksmith@example.com>\n ---> Using cache\n ---> 851baf55332b\nStep 2 : RUN apt-get update && apt-get install -y ruby ruby-dev\n ---> Running in 3a2558904e9b\nSelecting previously unselected package libasan0:amd64.\n(Reading database ... 11518 files and directories currently installed.)\nPreparing to unpack .../libasan0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libasan0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libatomic1:amd64.\nPreparing to unpack .../libatomic1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libatomic1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libgmp10:amd64.\nPreparing to unpack .../libgmp10_2%3a5.1.3+dfsg-1ubuntu1_amd64.deb ...\nUnpacking libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...\nSelecting previously unselected package libisl10:amd64.\nPreparing to unpack .../libisl10_0.12.2-1_amd64.deb ...\nUnpacking libisl10:amd64 (0.12.2-1) ...\nSelecting previously unselected package libcloog-isl4:amd64.\nPreparing to unpack .../libcloog-isl4_0.18.2-1_amd64.deb ...\nUnpacking libcloog-isl4:amd64 (0.18.2-1) ...\nSelecting previously unselected package libgomp1:amd64.\nPreparing to unpack .../libgomp1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libgomp1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libitm1:amd64.\nPreparing to unpack .../libitm1_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libitm1:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libmpfr4:amd64.\nPreparing to unpack .../libmpfr4_3.1.2-1_amd64.deb ...\nUnpacking libmpfr4:amd64 (3.1.2-1) ...\nSelecting previously unselected package libquadmath0:amd64.\nPreparing to unpack .../libquadmath0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libquadmath0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package 
libtsan0:amd64.\nPreparing to unpack .../libtsan0_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libtsan0:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package libyaml-0-2:amd64.\nPreparing to unpack .../libyaml-0-2_0.1.4-3ubuntu3_amd64.deb ...\nUnpacking libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...\nSelecting previously unselected package libmpc3:amd64.\nPreparing to unpack .../libmpc3_1.0.1-1ubuntu1_amd64.deb ...\nUnpacking libmpc3:amd64 (1.0.1-1ubuntu1) ...\nSelecting previously unselected package openssl.\nPreparing to unpack .../openssl_1.0.1f-1ubuntu2.4_amd64.deb ...\nUnpacking openssl (1.0.1f-1ubuntu2.4) ...\nSelecting previously unselected package ca-certificates.\nPreparing to unpack .../ca-certificates_20130906ubuntu2_all.deb ...\nUnpacking ca-certificates (20130906ubuntu2) ...\nSelecting previously unselected package manpages.\nPreparing to unpack .../manpages_3.54-1ubuntu1_all.deb ...\nUnpacking manpages (3.54-1ubuntu1) ...\nSelecting previously unselected package binutils.\nPreparing to unpack .../binutils_2.24-5ubuntu3_amd64.deb ...\nUnpacking binutils (2.24-5ubuntu3) ...\nSelecting previously unselected package cpp-4.8.\nPreparing to unpack .../cpp-4.8_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking cpp-4.8 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package cpp.\nPreparing to unpack .../cpp_4%3a4.8.2-1ubuntu6_amd64.deb ...\nUnpacking cpp (4:4.8.2-1ubuntu6) ...\nSelecting previously unselected package libgcc-4.8-dev:amd64.\nPreparing to unpack .../libgcc-4.8-dev_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package gcc-4.8.\nPreparing to unpack .../gcc-4.8_4.8.2-19ubuntu1_amd64.deb ...\nUnpacking gcc-4.8 (4.8.2-19ubuntu1) ...\nSelecting previously unselected package gcc.\nPreparing to unpack .../gcc_4%3a4.8.2-1ubuntu6_amd64.deb ...\nUnpacking gcc (4:4.8.2-1ubuntu6) ...\nSelecting previously unselected package libc-dev-bin.\nPreparing to unpack 
.../libc-dev-bin_2.19-0ubuntu6_amd64.deb ...\nUnpacking libc-dev-bin (2.19-0ubuntu6) ...\nSelecting previously unselected package linux-libc-dev:amd64.\nPreparing to unpack .../linux-libc-dev_3.13.0-30.55_amd64.deb ...\nUnpacking linux-libc-dev:amd64 (3.13.0-30.55) ...\nSelecting previously unselected package libc6-dev:amd64.\nPreparing to unpack .../libc6-dev_2.19-0ubuntu6_amd64.deb ...\nUnpacking libc6-dev:amd64 (2.19-0ubuntu6) ...\nSelecting previously unselected package ruby.\nPreparing to unpack .../ruby_1%3a1.9.3.4_all.deb ...\nUnpacking ruby (1:1.9.3.4) ...\nSelecting previously unselected package ruby1.9.1.\nPreparing to unpack .../ruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking ruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package libruby1.9.1.\nPreparing to unpack .../libruby1.9.1_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking libruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package manpages-dev.\nPreparing to unpack .../manpages-dev_3.54-1ubuntu1_all.deb ...\nUnpacking manpages-dev (3.54-1ubuntu1) ...\nSelecting previously unselected package ruby1.9.1-dev.\nPreparing to unpack .../ruby1.9.1-dev_1.9.3.484-2ubuntu1_amd64.deb ...\nUnpacking ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...\nSelecting previously unselected package ruby-dev.\nPreparing to unpack .../ruby-dev_1%3a1.9.3.4_all.deb ...\nUnpacking ruby-dev (1:1.9.3.4) ...\nSetting up libasan0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libatomic1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libgmp10:amd64 (2:5.1.3+dfsg-1ubuntu1) ...\nSetting up libisl10:amd64 (0.12.2-1) ...\nSetting up libcloog-isl4:amd64 (0.18.2-1) ...\nSetting up libgomp1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libitm1:amd64 (4.8.2-19ubuntu1) ...\nSetting up libmpfr4:amd64 (3.1.2-1) ...\nSetting up libquadmath0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libtsan0:amd64 (4.8.2-19ubuntu1) ...\nSetting up libyaml-0-2:amd64 (0.1.4-3ubuntu3) ...\nSetting up libmpc3:amd64 (1.0.1-1ubuntu1) ...\nSetting up openssl 
(1.0.1f-1ubuntu2.4) ...\nSetting up ca-certificates (20130906ubuntu2) ...\ndebconf: unable to initialize frontend: Dialog\ndebconf: (TERM is not set, so the dialog frontend is not usable.)\ndebconf: falling back to frontend: Readline\ndebconf: unable to initialize frontend: Readline\ndebconf: (This frontend requires a controlling tty.)\ndebconf: falling back to frontend: Teletype\nSetting up manpages (3.54-1ubuntu1) ...\nSetting up binutils (2.24-5ubuntu3) ...\nSetting up cpp-4.8 (4.8.2-19ubuntu1) ...\nSetting up cpp (4:4.8.2-1ubuntu6) ...\nSetting up libgcc-4.8-dev:amd64 (4.8.2-19ubuntu1) ...\nSetting up gcc-4.8 (4.8.2-19ubuntu1) ...\nSetting up gcc (4:4.8.2-1ubuntu6) ...\nSetting up libc-dev-bin (2.19-0ubuntu6) ...\nSetting up linux-libc-dev:amd64 (3.13.0-30.55) ...\nSetting up libc6-dev:amd64 (2.19-0ubuntu6) ...\nSetting up manpages-dev (3.54-1ubuntu1) ...\nSetting up libruby1.9.1 (1.9.3.484-2ubuntu1) ...\nSetting up ruby1.9.1-dev (1.9.3.484-2ubuntu1) ...\nSetting up ruby-dev (1:1.9.3.4) ...\nSetting up ruby (1:1.9.3.4) ...\nSetting up ruby1.9.1 (1.9.3.484-2ubuntu1) ...\nProcessing triggers for libc-bin (2.19-0ubuntu6) ...\nProcessing triggers for ca-certificates (20130906ubuntu2) ...\nUpdating certificates in /etc/ssl/certs... 
164 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d....done.\n ---> c55c31703134\nRemoving intermediate container 3a2558904e9b\nStep 3 : RUN gem install sinatra\n ---> Running in 6b81cb6313e5\nunable to convert \"\\xC3\" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping\nunable to convert \"\\xC3\" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to US-ASCII for README.rdoc, skipping\nSuccessfully installed rack-1.5.2\nSuccessfully installed tilt-1.4.1\nSuccessfully installed rack-protection-1.5.3\nSuccessfully installed sinatra-1.4.5\n4 gems installed\nInstalling ri documentation for rack-1.5.2...\nInstalling ri documentation for tilt-1.4.1...\nInstalling ri documentation for rack-protection-1.5.3...\nInstalling ri documentation for sinatra-1.4.5...\nInstalling RDoc documentation for rack-1.5.2...\nInstalling RDoc documentation for tilt-1.4.1...\nInstalling RDoc documentation for rack-protection-1.5.3...\nInstalling RDoc documentation for sinatra-1.4.5...\n ---> 97feabe5d2ed\nRemoving intermediate container 6b81cb6313e5\nSuccessfully built 97feabe5d2ed We've specified our docker build command and used the -t flag to identify\nour new image as belonging to the user ouruser , the repository name sinatra \nand given it the tag v2 . We've also specified the location of our Dockerfile using the . to\nindicate a Dockerfile in the current directory. Note: \nYou can also specify a path to a Dockerfile . Now we can see the build process at work. The first thing Docker does is\nupload the build context: basically the contents of the directory you're\nbuilding in. This is done because the Docker daemon does the actual\nbuild of the image and it needs the local context to do it. Next we can see each instruction in the Dockerfile being executed\nstep-by-step. 
We can see that each step creates a new container, runs\nthe instruction inside that container and then commits that change -\njust like the docker commit work flow we saw earlier. When all the\ninstructions have executed we're left with the 97feabe5d2ed image\n(also helpfully tagged as ouruser/sinatra:v2 ) and all intermediate\ncontainers will get removed to clean things up. Note: \nAn image can't have more than 127 layers regardless of the storage driver.\nThis limitation is set globally to encourage optimization of the overall \nsize of images. We can then create a container from our new image. $ sudo docker run -t -i ouruser/sinatra:v2 /bin/bash\nroot@8196968dac35:/# Note: \nThis is just a brief introduction to creating images. We've\nskipped a whole bunch of other instructions that you can use. We'll see more of\nthose instructions in later sections of the Guide or you can refer to the Dockerfile reference for a\ndetailed description and examples of every instruction.\nTo help you write a clear, readable, maintainable Dockerfile , we've also\nwritten a Dockerfile Best Practices guide . More To learn more, check out the Dockerfile tutorial .",
"title": "Creating our own images"
},
{
"loc": "/userguide/dockerimages#setting-tags-on-an-image",
"tags": "",
"text": "You can also add a tag to an existing image after you commit or build it. We\ncan do this using the docker tag command. Let's add a new tag to our ouruser/sinatra image. $ sudo docker tag 5db5f8471261 ouruser/sinatra:devel The docker tag command takes the ID of the image, here 5db5f8471261 , and our\nuser name, the repository name and the new tag. Let's see our new tag using the docker images command. $ sudo docker images ouruser/sinatra\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nouruser/sinatra latest 5db5f8471261 11 hours ago 446.7 MB\nouruser/sinatra devel 5db5f8471261 11 hours ago 446.7 MB\nouruser/sinatra v2 5db5f8471261 11 hours ago 446.7 MB",
"title": "Setting tags on an image"
},
{
"loc": "/userguide/dockerimages#push-an-image-to-docker-hub",
"tags": "",
"text": "Once you've built or created a new image you can push it to Docker\nHub using the docker push command. This\nallows you to share it with others, either publicly, or push it into a\nprivate repository . $ sudo docker push ouruser/sinatra\nThe push refers to a repository [ouruser/sinatra] (len: 1)\nSending image list\nPushing repository ouruser/sinatra (3 tags)\n. . .",
"title": "Push an image to Docker Hub"
},
{
"loc": "/userguide/dockerimages#remove-an-image-from-the-host",
"tags": "",
"text": "You can also remove images on your Docker host in a way similar to\ncontainers using the docker rmi command. Let's delete the training/sinatra image as we don't need it anymore. $ sudo docker rmi training/sinatra\nUntagged: training/sinatra:latest\nDeleted: 5bc342fa0b91cabf65246837015197eecfa24b2213ed6a51a8974ae250fedd8d\nDeleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f\nDeleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0 Note: In order to remove an image from the host, please make sure\nthat there are no containers actively based on it.",
"title": "Remove an image from the host"
},
{
"loc": "/userguide/dockerimages#next-steps",
"tags": "",
"text": "Until now we've seen how to build individual applications inside Docker\ncontainers. Now learn how to build whole application stacks with Docker\nby linking together multiple Docker containers. Test your Dockerfile knowledge with the Dockerfile tutorial . Go to Linking Containers Together .",
"title": "Next steps"
},
{
"loc": "/userguide/dockerlinks/",
"tags": "",
"text": "Linking Containers Together\nIn the Using Docker section, you saw how you can\nconnect to a service running inside a Docker container via a network\nport. But a port connection is only one way you can interact with services and\napplications running inside Docker containers. In this section, we'll briefly revisit\nconnecting via a network port and then we'll introduce you to another method of access:\ncontainer linking.\nConnect using Network port mapping\nIn the Using Docker section, you created a\ncontainer that ran a Python Flask application:\n$ sudo docker run -d -P training/webapp python app.py\n\n\nNote: \nContainers have an internal network and an IP address\n(as we saw when we used the docker inspect command to show the container's\nIP address in the Using Docker section).\nDocker can have a variety of network configurations. You can see more\ninformation on Docker networking here.\n\nWhen that container was created, the -P flag was used to automatically map any\nnetwork ports inside it to a random high port from the range 49153\nto 65535 on our Docker host. Next, when docker ps was run, you saw that\nport 5000 in the container was bound to port 49155 on the host.\n$ sudo docker ps nostalgic_morse\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nbc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse\n\nYou also saw how you can bind a container's ports to a specific port using\nthe -p flag:\n$ sudo docker run -d -p 5000:5000 training/webapp python app.py\n\nAnd you saw why this isn't such a great idea because it constrains you to\nonly one container on that specific port.\nThere are also a few other ways you can configure the -p flag. By\ndefault the -p flag will bind the specified port to all interfaces on\nthe host machine. 
But you can also specify a binding to a specific\ninterface, for example only to the localhost.\n$ sudo docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py\n\nThis would bind port 5000 inside the container to port 5000 on the\nlocalhost or 127.0.0.1 interface on the host machine.\nOr, to bind port 5000 of the container to a dynamic port but only on the\nlocalhost, you could use:\n$ sudo docker run -d -p 127.0.0.1::5000 training/webapp python app.py\n\nYou can also bind UDP ports by adding a trailing /udp. For example:\n$ sudo docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py\n\nYou also learned about the useful docker port shortcut which showed us the\ncurrent port bindings. This is also useful for showing you specific port\nconfigurations. For example, if you've bound the container port to the\nlocalhost on the host machine, then the docker port output will reflect that.\n$ sudo docker port nostalgic_morse 5000\n127.0.0.1:49155\n\n\nNote: \nThe -p flag can be used multiple times to configure multiple ports.\n\nConnect with the linking system\nNetwork port mappings are not the only way Docker containers can connect\nto one another. Docker also has a linking system that allows you to link\nmultiple containers together and send connection information from one to another.\nWhen containers are linked, information about a source container can be sent to a\nrecipient container. This allows the recipient to see selected data describing\naspects of the source container.\nThe importance of naming\nTo establish links, Docker relies on the names of your containers.\nYou've already seen that each container you create has an automatically\ncreated name; indeed you've become familiar with our old friend\nnostalgic_morse during this guide. You can also name containers\nyourself. 
This naming provides two useful functions:\n\n\nIt can be useful to name containers that do specific functions in a way\n that makes it easier for you to remember them, for example naming a\n container containing a web application web.\n\n\nIt provides Docker with a reference point that allows it to refer to other\n containers, for example, you can specify to link the container web to container db.\n\n\nYou can name your container by using the --name flag, for example:\n$ sudo docker run -d -P --name web training/webapp python app.py\n\nThis launches a new container and uses the --name flag to\nname the container web. You can see the container's name using the\ndocker ps command.\n$ sudo docker ps -l\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\naed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154->5000/tcp web\n\nYou can also use docker inspect to return the container's name.\n$ sudo docker inspect -f \"{{ .Name }}\" aed84ee21bde\n/web\n\n\nNote: \nContainer names have to be unique. That means you can only call\none container web. If you want to re-use a container name you must delete\nthe old container (with docker rm) before you can create a new\ncontainer with the same name. As an alternative you can use the --rm\nflag with the docker run command. This will delete the container\nimmediately after it is stopped.\n\nCommunication across links\nLinks allow containers to discover each other and securely transfer information about one\ncontainer to another container. When you set up a link, you create a conduit between a\nsource container and a recipient container. The recipient can then access select data\nabout the source. To create a link, you use the --link flag. 
First, create a new\ncontainer, this time one containing a database.\n$ sudo docker run -d --name db training/postgres\n\nThis creates a new container called db from the training/postgres\nimage, which contains a PostgreSQL database.\nNow, you need to delete the web container you created previously so you can replace it\nwith a linked one:\n$ sudo docker rm -f web\n\nNow, create a new web container and link it with your db container.\n$ sudo docker run -d -P --name web --link db:db training/webapp python app.py\n\nThis will link the new web container with the db container you created\nearlier. The --link flag takes the form:\n--link name or id:alias\n\nWhere name is the name of the container we're linking to and alias is an\nalias for the link name. You'll see how that alias gets used shortly.\nNext, inspect your linked containers with docker inspect:\n$ sudo docker inspect -f \"{{ .HostConfig.Links }}\" web\n[/db:/web/db]\n\nYou can see that the web container is now linked to the db container\nweb/db. Which allows it to access information about the db container.\nSo what does linking the containers actually do? You've learned that a link allows a\nsource container to provide information about itself to a recipient container. In\nour example, the recipient, web, can access information about the source db. To do\nthis, Docker creates a secure tunnel between the containers that doesn't need to\nexpose any ports externally on the container; you'll note when we started the\ndb container we did not use either the -P or -p flags. That's a big benefit of\nlinking: we don't need to expose the source container, here the PostgreSQL database, to\nthe network.\nDocker exposes connectivity information for the source container to the\nrecipient container in two ways:\n\nEnvironment variables,\nUpdating the /etc/hosts file.\n\nEnvironment Variables\nDocker creates several environment variables when you link containers. 
Docker\nautomatically creates environment variables in the target container based on\nthe --link parameters. It will also expose all environment variables \noriginating from Docker from the source container. These include variables from:\n\nthe ENV commands in the source container's Dockerfile\nthe -e, --env and --env-file options on the docker run\ncommand when the source container is started\n\nThese environment variables enable programmatic discovery from within the\ntarget container of information related to the source container.\n\nWarning:\nIt is important to understand that all environment variables originating\nfrom Docker within a container are made available to any container\nthat links to it. This could have serious security implications if sensitive\ndata is stored in them.\n\nDocker sets an alias_NAME environment variable for each target container\nlisted in the --link parameter. For example, if a new container called\nweb is linked to a database container called db via --link db:webdb,\nthen Docker creates a WEBDB_NAME=/web/webdb variable in the web container.\nDocker also defines a set of environment variables for each port exposed by the\nsource container. Each variable has a unique prefix in the form:\nname_PORT_port_protocol\nThe components in this prefix are:\n\nthe alias name specified in the --link parameter (for example, webdb)\nthe port number exposed\na protocol which is either TCP or UDP\n\nDocker uses this prefix format to define three distinct environment variables:\n\nThe prefix_ADDR variable contains the IP Address from the URL, for\nexample WEBDB_PORT_8080_TCP_ADDR=172.17.0.82.\nThe prefix_PORT variable contains just the port number from the URL for\nexample WEBDB_PORT_8080_TCP_PORT=8080.\nThe prefix_PROTO variable contains just the protocol from the URL for\nexample WEBDB_PORT_8080_TCP_PROTO=tcp.\n\nIf the container exposes multiple ports, an environment variable set is\ndefined for each one. 
This means, for example, if a container exposes 4 ports\nthat Docker creates 12 environment variables, 3 for each port.\nAdditionally, Docker creates an environment variable called alias_PORT.\nThis variable contains the URL of the source container's first exposed port.\nThe 'first' port is defined as the exposed port with the lowest number.\nFor example, consider the WEBDB_PORT=tcp://172.17.0.82:8080 variable. If\nthat port is used for both tcp and udp, then the tcp one is specified.\nFinally, Docker also exposes each Docker originated environment variable\nfrom the source container as an environment variable in the target. For each\nvariable Docker creates an alias_ENV_name variable in the target \ncontainer. The variable's value is set to the value Docker used when it \nstarted the source container.\nReturning back to our database example, you can run the env\ncommand to list the specified container's environment variables.\n $ sudo docker run --rm --name web2 --link db:db training/webapp env\n . . .\n DB_NAME=/web2/db\n DB_PORT=tcp://172.17.0.5:5432\n DB_PORT_5432_TCP=tcp://172.17.0.5:5432\n DB_PORT_5432_TCP_PROTO=tcp\n DB_PORT_5432_TCP_PORT=5432\n DB_PORT_5432_TCP_ADDR=172.17.0.5\n . . .\n\n\nYou can see that Docker has created a series of environment variables with\nuseful information about the source db container. Each variable is prefixed\nwith\nDB_, which is populated from the alias you specified above. If the alias\nwere db1, the variables would be prefixed with DB1_. You can use these\nenvironment variables to configure your applications to connect to the database\non the db container. The connection will be secure and private; only the\nlinked web container will be able to talk to the db container.\nImportant notes on Docker environment variables\nUnlike host entries in the /etc/hosts file,\nIP addresses stored in the environment variables are not automatically updated\nif the source container is restarted. 
We recommend using the host entries in\n/etc/hosts to resolve the IP address of linked containers.\nThese environment variables are only set for the first process in the\ncontainer. Some daemons, such as sshd, will scrub them when spawning shells\nfor connection.\nUpdating the /etc/hosts file\nIn addition to the environment variables, Docker adds a host entry for the\nsource container to the /etc/hosts file. Here's an entry for the web\ncontainer:\n$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash\nroot@aed84ee21bde:/opt/webapp# cat /etc/hosts\n172.17.0.7 aed84ee21bde\n. . .\n172.17.0.5 db\n\nYou can see two relevant host entries. The first is an entry for the web\ncontainer that uses the Container ID as a host name. The second entry uses the\nlink alias to reference the IP address of the db container. You can ping\nthat host now via this host name.\nroot@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping\nroot@aed84ee21bde:/opt/webapp# ping db\nPING db (172.17.0.5): 48 data bytes\n56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms\n56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms\n56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms\n\n\nNote: \nIn the example, you'll note you had to install ping because it was not included\nin the container initially.\n\nHere, you used the ping command to ping the db container using its host entry,\nwhich resolves to 172.17.0.5. You can use this host entry to configure an application\nto make use of your db container.\n\nNote: \nYou can link multiple recipient containers to a single source. 
For\nexample, you could have multiple (differently named) web containers attached to your\ndb container.\n\nIf you restart the source container, the linked containers /etc/hosts files\nwill be automatically updated with the source container's new IP address,\nallowing linked communication to continue.\n$ sudo docker restart db\ndb\n$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash\nroot@aed84ee21bde:/opt/webapp# cat /etc/hosts\n172.17.0.7 aed84ee21bde\n. . .\n172.17.0.9 db\n\nNext step\nNow that you know how to link Docker containers together, the next step is\nlearning how to manage data, volumes and mounts inside your containers.\nGo to Managing Data in Containers.",
"title": "Linking containers together"
},
{
"loc": "/userguide/dockerlinks#linking-containers-together",
"tags": "",
"text": "In the Using Docker section , you saw how you can\nconnect to a service running inside a Docker container via a network\nport. But a port connection is only one way you can interact with services and\napplications running inside Docker containers. In this section, we'll briefly revisit\nconnecting via a network port and then we'll introduce you to another method of access:\ncontainer linking.",
"title": "Linking Containers Together"
},
{
"loc": "/userguide/dockerlinks#connect-using-network-port-mapping",
"tags": "",
"text": "In the Using Docker section , you created a\ncontainer that ran a Python Flask application: $ sudo docker run -d -P training/webapp python app.py Note: \nContainers have an internal network and an IP address\n(as we saw when we used the docker inspect command to show the container's\nIP address in the Using Docker section).\nDocker can have a variety of network configurations. You can see more\ninformation on Docker networking here . When that container was created, the -P flag was used to automatically map any\nnetwork ports inside it to a random high port from the range 49153\nto 65535 on our Docker host. Next, when docker ps was run, you saw that\nport 5000 in the container was bound to port 49155 on the host. $ sudo docker ps nostalgic_morse\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nbc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse You also saw how you can bind a container's ports to a specific port using\nthe -p flag: $ sudo docker run -d -p 5000:5000 training/webapp python app.py And you saw why this isn't such a great idea because it constrains you to\nonly one container on that specific port. There are also a few other ways you can configure the -p flag. By\ndefault the -p flag will bind the specified port to all interfaces on\nthe host machine. But you can also specify a binding to a specific\ninterface, for example only to the localhost . $ sudo docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py This would bind port 5000 inside the container to port 5000 on the localhost or 127.0.0.1 interface on the host machine. Or, to bind port 5000 of the container to a dynamic port but only on the localhost , you could use: $ sudo docker run -d -p 127.0.0.1::5000 training/webapp python app.py You can also bind UDP ports by adding a trailing /udp . 
For example: $ sudo docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py You also learned about the useful docker port shortcut which showed us the\ncurrent port bindings. This is also useful for showing you specific port\nconfigurations. For example, if you've bound the container port to the localhost on the host machine, then the docker port output will reflect that. $ sudo docker port nostalgic_morse 5000\n127.0.0.1:49155 Note: \nThe -p flag can be used multiple times to configure multiple ports.",
"title": "Connect using Network port mapping"
},
{
"loc": "/userguide/dockerlinks#connect-with-the-linking-system",
"tags": "",
"text": "Network port mappings are not the only way Docker containers can connect\nto one another. Docker also has a linking system that allows you to link\nmultiple containers together and send connection information from one to another.\nWhen containers are linked, information about a source container can be sent to a\nrecipient container. This allows the recipient to see selected data describing\naspects of the source container. The importance of naming To establish links, Docker relies on the names of your containers.\nYou've already seen that each container you create has an automatically\ncreated name; indeed you've become familiar with our old friend nostalgic_morse during this guide. You can also name containers\nyourself. This naming provides two useful functions: It can be useful to name containers that do specific functions in a way\n that makes it easier for you to remember them, for example naming a\n container containing a web application web . It provides Docker with a reference point that allows it to refer to other\n containers, for example, you can specify to link the container web to container db . You can name your container by using the --name flag, for example: $ sudo docker run -d -P --name web training/webapp python app.py This launches a new container and uses the --name flag to\nname the container web . You can see the container's name using the docker ps command. $ sudo docker ps -l\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\naed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154- 5000/tcp web You can also use docker inspect to return the container's name. $ sudo docker inspect -f \"{{ .Name }}\" aed84ee21bde\n/web Note: \nContainer names have to be unique. That means you can only call\none container web . If you want to re-use a container name you must delete\nthe old container (with docker rm ) before you can create a new\ncontainer with the same name. 
As an alternative you can use the --rm \nflag with the docker run command. This will delete the container\nimmediately after it is stopped.",
"title": "Connect with the linking system"
},
{
"loc": "/userguide/dockerlinks#communication-across-links",
"tags": "",
"text": "Links allow containers to discover each other and securely transfer information about one\ncontainer to another container. When you set up a link, you create a conduit between a\nsource container and a recipient container. The recipient can then access select data\nabout the source. To create a link, you use the --link flag. First, create a new\ncontainer, this time one containing a database. $ sudo docker run -d --name db training/postgres This creates a new container called db from the training/postgres \nimage, which contains a PostgreSQL database. Now, you need to delete the web container you created previously so you can replace it\nwith a linked one: $ sudo docker rm -f web Now, create a new web container and link it with your db container. $ sudo docker run -d -P --name web --link db:db training/webapp python app.py This will link the new web container with the db container you created\nearlier. The --link flag takes the form: --link name or id :alias Where name is the name of the container we're linking to and alias is an\nalias for the link name. You'll see how that alias gets used shortly. Next, inspect your linked containers with docker inspect : $ sudo docker inspect -f \"{{ .HostConfig.Links }}\" web\n[/db:/web/db] You can see that the web container is now linked to the db container web/db . Which allows it to access information about the db container. So what does linking the containers actually do? You've learned that a link allows a\nsource container to provide information about itself to a recipient container. In\nour example, the recipient, web , can access information about the source db . To do\nthis, Docker creates a secure tunnel between the containers that doesn't need to\nexpose any ports externally on the container; you'll note when we started the db container we did not use either the -P or -p flags. That's a big benefit of\nlinking: we don't need to expose the source container, here the PostgreSQL database, to\nthe network. 
Docker exposes connectivity information for the source container to the\nrecipient container in two ways: Environment variables, Updating the /etc/hosts file. Environment Variables Docker creates several environment variables when you link containers. Docker\nautomatically creates environment variables in the target container based on\nthe --link parameters. It will also expose all environment variables \noriginating from Docker from the source container. These include variables from: the ENV commands in the source container's Dockerfile the -e , --env and --env-file options on the docker run \ncommand when the source container is started These environment variables enable programmatic discovery from within the\ntarget container of information related to the source container. Warning :\nIt is important to understand that all environment variables originating\nfrom Docker within a container are made available to any container\nthat links to it. This could have serious security implications if sensitive\ndata is stored in them. Docker sets an alias _NAME environment variable for each target container\nlisted in the --link parameter. For example, if a new container called web is linked to a database container called db via --link db:webdb ,\nthen Docker creates a WEBDB_NAME=/web/webdb variable in the web container. Docker also defines a set of environment variables for each port exposed by the\nsource container. Each variable has a unique prefix in the form: name _PORT_ port _ protocol The components in this prefix are: the alias name specified in the --link parameter (for example, webdb ) the port number exposed a protocol which is either TCP or UDP Docker uses this prefix format to define three distinct environment variables: The prefix_ADDR variable contains the IP Address from the URL, for\nexample WEBDB_PORT_8080_TCP_ADDR=172.17.0.82 . The prefix_PORT variable contains just the port number from the URL for\nexample WEBDB_PORT_8080_TCP_PORT=8080 . 
The prefix_PROTO variable contains just the protocol from the URL for\nexample WEBDB_PORT_8080_TCP_PROTO=tcp . If the container exposes multiple ports, an environment variable set is\ndefined for each one. This means, for example, if a container exposes 4 ports\nthen Docker creates 12 environment variables, 3 for each port. Additionally, Docker creates an environment variable called alias _PORT .\nThis variable contains the URL of the source container's first exposed port.\nThe 'first' port is defined as the exposed port with the lowest number.\nFor example, consider the WEBDB_PORT=tcp://172.17.0.82:8080 variable. If\nthat port is used for both tcp and udp, then the tcp one is specified. Finally, Docker also exposes each Docker originated environment variable\nfrom the source container as an environment variable in the target. For each\nvariable Docker creates an alias _ENV_ name variable in the target \ncontainer. The variable's value is set to the value Docker used when it \nstarted the source container. Returning to our database example, you can run the env \ncommand to list the specified container's environment variables. $ sudo docker run --rm --name web2 --link db:db training/webapp env\n . . .\n DB_NAME=/web2/db\n DB_PORT=tcp://172.17.0.5:5432\n DB_PORT_5432_TCP=tcp://172.17.0.5:5432\n DB_PORT_5432_TCP_PROTO=tcp\n DB_PORT_5432_TCP_PORT=5432\n DB_PORT_5432_TCP_ADDR=172.17.0.5\n . . . You can see that Docker has created a series of environment variables with\nuseful information about the source db container. Each variable is prefixed\nwith DB_ , which is populated from the alias you specified above. If the alias \nwere db1 , the variables would be prefixed with DB1_ . You can use these\nenvironment variables to configure your applications to connect to the database\non the db container. The connection will be secure and private; only the\nlinked web container will be able to talk to the db container. 
Important notes on Docker environment variables Unlike host entries in the /etc/hosts file ,\nIP addresses stored in the environment variables are not automatically updated\nif the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers. These environment variables are only set for the first process in the\ncontainer. Some daemons, such as sshd , will scrub them when spawning shells\nfor connection. Updating the /etc/hosts file In addition to the environment variables, Docker adds a host entry for the\nsource container to the /etc/hosts file. Here's an entry for the web \ncontainer: $ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash\nroot@aed84ee21bde:/opt/webapp# cat /etc/hosts\n172.17.0.7 aed84ee21bde\n. . .\n172.17.0.5 db You can see two relevant host entries. The first is an entry for the web \ncontainer that uses the Container ID as a host name. The second entry uses the\nlink alias to reference the IP address of the db container. You can ping\nthat host now via this host name. root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping\nroot@aed84ee21bde:/opt/webapp# ping db\nPING db (172.17.0.5): 48 data bytes\n56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms\n56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms\n56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms Note: \nIn the example, you'll note you had to install ping because it was not included\nin the container initially. Here, you used the ping command to ping the db container using its host entry,\nwhich resolves to 172.17.0.5 . You can use this host entry to configure an application\nto make use of your db container. Note: \nYou can link multiple recipient containers to a single source. For\nexample, you could have multiple (differently named) web containers attached to your db container. 
If you restart the source container, the linked containers' /etc/hosts files\nwill be automatically updated with the source container's new IP address,\nallowing linked communication to continue. $ sudo docker restart db\ndb\n$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash\nroot@aed84ee21bde:/opt/webapp# cat /etc/hosts\n172.17.0.7 aed84ee21bde\n. . .\n172.17.0.9 db",
"title": "Communication across links"
},
{
"loc": "/userguide/dockerlinks#next-step",
"tags": "",
"text": "Now that you know how to link Docker containers together, the next step is\nlearning how to manage data, volumes and mounts inside your containers. Go to Managing Data in Containers .",
"title": "Next step"
},
{
"loc": "/userguide/dockervolumes/",
"tags": "",
"text": "Managing Data in Containers\nSo far we've been introduced to some basic Docker\nconcepts, seen how to work with Docker\nimages as well as learned about networking\nand links between containers. In this section\nwe're going to discuss how you can manage data inside and between your\nDocker containers.\nWe're going to look at the two primary ways you can manage data in\nDocker.\n\nData volumes, and\nData volume containers.\n\nData volumes\nA data volume is a specially-designated directory within one or more\ncontainers that bypasses the Union File\nSystem. Data volumes provide several \nuseful features for persistent or shared data:\n\nVolumes are initialized when a container is created. If the container's\n base image contains data at the specified mount point, that data is \n copied into the new volume.\nData volumes can be shared and reused among containers.\nChanges to a data volume are made directly.\nChanges to a data volume will not be included when you update an image.\nData volumes persist even if the container itself is deleted.\n\nData volumes are designed to persist data, independent of the container's life \ncycle. Docker therefore never automatically delete volumes when you remove \na container, nor will it \"garbage collect\" volumes that are no longer \nreferenced by a container.\nAdding a data volume\nYou can add a data volume to a container using the -v flag with the\ndocker create and docker run command. You can use the -v multiple times\nto mount multiple data volumes. 
Let's mount a single volume now in our web\napplication container.\n$ sudo docker run -d -P --name web -v /webapp training/webapp python app.py\n\nThis will create a new volume inside a container at /webapp.\n\nNote: \nYou can also use the VOLUME instruction in a Dockerfile to add one or\nmore new volumes to any container created from that image.\n\nMount a Host Directory as a Data Volume\nIn addition to creating a volume using the -v flag you can also mount a\ndirectory from your Docker daemon's host into a container.\n\nNote:\nIf you are using Boot2Docker, your Docker daemon only has limited access to\nyour OSX/Windows filesystem. Boot2Docker tries to auto-share your /Users\n(OSX) or C:\\Users (Windows) directory - and so you can mount files or directories\nusing docker run -v /Users/path:/container path ... (OSX) or\ndocker run -v /c/Users/path:/container path ... (Windows). All other paths\ncome from the Boot2Docker virtual machine's filesystem.\n\n$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py\n\nThis will mount the host directory, /src/webapp, into the container at\n/opt/webapp.\n\nNote:\nIf the path /opt/webapp already exists inside the container's image, its\ncontents will be replaced by the contents of /src/webapp on the host to stay\nconsistent with the expected behavior of mount\n\nThis is very useful for testing, for example we can\nmount our source code inside the container and see our application at work as\nwe change the source code. The directory on the host must be specified as an\nabsolute path and if the directory doesn't exist Docker will automatically\ncreate it for you.\n\nNote: \nThis is not available from a Dockerfile due to the portability\nand sharing purpose of built images. 
The host directory is, by its nature,\nhost-dependent, so a host directory specified in a Dockerfile probably\nwouldn't work on all hosts.\n\nDocker defaults to a read-write volume but we can also mount a directory\nread-only.\n$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py\n\nHere we've mounted the same /src/webapp directory but we've added the ro\noption to specify that the mount should be read-only.\nMount a Host File as a Data Volume\nThe -v flag can also be used to mount a single file - instead of just \ndirectories - from the host machine.\n$ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash\n\nThis will drop you into a bash shell in a new container, you will have your bash \nhistory from the host and when you exit the container, the host will have the \nhistory of the commands typed while in the container.\n\nNote: \nMany tools used to edit files including vi and sed --in-place may result \nin an inode change. Since Docker v1.1.0, this will produce an error such as\n\"sed: cannot rename ./sedKdJ9Dy: Device or resource busy\". 
In the case where \nyou want to edit the mounted file, it is often easiest to instead mount the \nparent directory.\n\nCreating and mounting a Data Volume Container\nIf you have some persistent data that you want to share between\ncontainers, or want to use from non-persistent containers, it's best to\ncreate a named Data Volume Container, and then to mount the data from\nit.\nLet's create a new named container with a volume to share.\nWhile this container doesn't run an application, it reuses the training/postgres\nimage so that all containers are using layers in common, saving disk space.\n$ sudo docker create -v /dbdata --name dbdata training/postgres\n\nYou can then use the --volumes-from flag to mount the /dbdata volume in another container.\n$ sudo docker run -d --volumes-from dbdata --name db1 training/postgres\n\nAnd another:\n$ sudo docker run -d --volumes-from dbdata --name db2 training/postgres\n\nIn this case, if the postgres image contained a directory called /dbdata\nthen mounting the volumes from the dbdata container hides the\n/dbdata files from the postgres image. The result is only the files\nfrom the dbdata container are visible.\nYou can use multiple --volumes-from parameters to bring together multiple data\nvolumes from multiple containers.\nYou can also extend the chain by mounting the volume that came from the\ndbdata container in yet another container via the db1 or db2 containers.\n$ sudo docker run -d --name db3 --volumes-from db1 training/postgres\n\nIf you remove containers that mount volumes, including the initial dbdata\ncontainer, or the subsequent containers db1 and db2, the volumes will not\nbe deleted. To delete the volume from disk, you must explicitly call\ndocker rm -v against the last container with a reference to the volume. This\nallows you to upgrade, or effectively migrate data volumes between containers.\n\nNote: Docker will not warn you when removing a container without \nproviding the -v option to delete its volumes. 
If you remove containers\nwithout using the -v option, you may end up with \"dangling\" volumes; \nvolumes that are no longer referenced by a container.\nDangling volumes are difficult to get rid of and can take up a large amount\nof disk space. We're working on improving volume management and you can check\nprogress on this in pull request #8484\n\nBackup, restore, or migrate data volumes\nAnother useful function we can perform with volumes is to use them for\nbackups, restores or migrations. We do this by using the\n--volumes-from flag to create a new container that mounts that volume,\nlike so:\n$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata\n\nHere we've launched a new container and mounted the volume from the\ndbdata container. We've then mounted a local host directory as\n/backup. Finally, we've passed a command that uses tar to back up the\ncontents of the dbdata volume to a backup.tar file inside our\n/backup directory. When the command completes and the container stops\nwe'll be left with a backup of our dbdata volume.\nYou could then restore it to the same container, or another that you've made\nelsewhere. Create a new container.\n$ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash\n\nThen un-tar the backup file in the new container's data volume.\n$ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar\n\nYou can use the techniques above to automate backup, migration and\nrestore testing using your preferred tools.\nNext steps\nNow we've learned a bit more about how to use Docker we're going to see how to\ncombine Docker with the services available on\nDocker Hub including Automated Builds and private\nrepositories.\nGo to Working with Docker Hub.",
"title": "Managing data in containers"
},
{
"loc": "/userguide/dockervolumes#managing-data-in-containers",
"tags": "",
"text": "So far we've been introduced to some basic Docker\nconcepts , seen how to work with Docker\nimages as well as learned about networking\nand links between containers . In this section\nwe're going to discuss how you can manage data inside and between your\nDocker containers. We're going to look at the two primary ways you can manage data in\nDocker. Data volumes, and Data volume containers.",
"title": "Managing Data in Containers"
},
{
"loc": "/userguide/dockervolumes#data-volumes",
"tags": "",
"text": "A data volume is a specially-designated directory within one or more\ncontainers that bypasses the Union File\nSystem . Data volumes provide several \nuseful features for persistent or shared data: Volumes are initialized when a container is created. If the container's\n base image contains data at the specified mount point, that data is \n copied into the new volume. Data volumes can be shared and reused among containers. Changes to a data volume are made directly. Changes to a data volume will not be included when you update an image. Data volumes persist even if the container itself is deleted. Data volumes are designed to persist data, independent of the container's life \ncycle. Docker therefore never automatically delete volumes when you remove \na container, nor will it \"garbage collect\" volumes that are no longer \nreferenced by a container. Adding a data volume You can add a data volume to a container using the -v flag with the docker create and docker run command. You can use the -v multiple times\nto mount multiple data volumes. Let's mount a single volume now in our web\napplication container. $ sudo docker run -d -P --name web -v /webapp training/webapp python app.py This will create a new volume inside a container at /webapp . Note: \nYou can also use the VOLUME instruction in a Dockerfile to add one or\nmore new volumes to any container created from that image. Mount a Host Directory as a Data Volume In addition to creating a volume using the -v flag you can also mount a\ndirectory from your Docker daemon's host into a container. Note: \nIf you are using Boot2Docker, your Docker daemon only has limited access to\nyour OSX/Windows filesystem. Boot2Docker tries to auto-share your /Users \n(OSX) or C:\\Users (Windows) directory - and so you can mount files or directories\nusing docker run -v /Users/ path :/ container path ... (OSX) or docker run -v /c/Users/ path :/ container path ... (Windows). 
All other paths\ncome from the Boot2Docker virtual machine's filesystem. $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py This will mount the host directory, /src/webapp , into the container at /opt/webapp . Note: \nIf the path /opt/webapp already exists inside the container's image, its\ncontents will be replaced by the contents of /src/webapp on the host to stay\nconsistent with the expected behavior of mount This is very useful for testing, for example we can\nmount our source code inside the container and see our application at work as\nwe change the source code. The directory on the host must be specified as an\nabsolute path and if the directory doesn't exist Docker will automatically\ncreate it for you. Note: \nThis is not available from a Dockerfile due to the portability\nand sharing purpose of built images. The host directory is, by its nature,\nhost-dependent, so a host directory specified in a Dockerfile probably\nwouldn't work on all hosts. Docker defaults to a read-write volume but we can also mount a directory\nread-only. $ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py Here we've mounted the same /src/webapp directory but we've added the ro \noption to specify that the mount should be read-only. Mount a Host File as a Data Volume The -v flag can also be used to mount a single file - instead of just \ndirectories - from the host machine. $ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash This will drop you into a bash shell in a new container, you will have your bash \nhistory from the host and when you exit the container, the host will have the \nhistory of the commands typed while in the container. Note: \nMany tools used to edit files including vi and sed --in-place may result \nin an inode change. Since Docker v1.1.0, this will produce an error such as\n\" sed: cannot rename ./sedKdJ9Dy: Device or resource busy \". 
In the case where \nyou want to edit the mounted file, it is often easiest to instead mount the \nparent directory.",
"title": "Data volumes"
},
{
"loc": "/userguide/dockervolumes#creating-and-mounting-a-data-volume-container",
"tags": "",
"text": "If you have some persistent data that you want to share between\ncontainers, or want to use from non-persistent containers, it's best to\ncreate a named Data Volume Container, and then to mount the data from\nit. Let's create a new named container with a volume to share.\nWhile this container doesn't run an application, it reuses the training/postgres \nimage so that all containers are using layers in common, saving disk space. $ sudo docker create -v /dbdata --name dbdata training/postgres You can then use the --volumes-from flag to mount the /dbdata volume in another container. $ sudo docker run -d --volumes-from dbdata --name db1 training/postgres And another: $ sudo docker run -d --volumes-from dbdata --name db2 training/postgres In this case, if the postgres image contained a directory called /dbdata \nthen mounting the volumes from the dbdata container hides the /dbdata files from the postgres image. The result is only the files\nfrom the dbdata container are visible. You can use multiple --volumes-from parameters to bring together multiple data\nvolumes from multiple containers. You can also extend the chain by mounting the volume that came from the dbdata container in yet another container via the db1 or db2 containers. $ sudo docker run -d --name db3 --volumes-from db1 training/postgres If you remove containers that mount volumes, including the initial dbdata \ncontainer, or the subsequent containers db1 and db2 , the volumes will not\nbe deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This\nallows you to upgrade, or effectively migrate data volumes between containers. Note: Docker will not warn you when removing a container without \nproviding the -v option to delete its volumes. 
If you remove containers\nwithout using the -v option, you may end up with \"dangling\" volumes; \nvolumes that are no longer referenced by a container.\nDangling volumes are difficult to get rid of and can take up a large amount\nof disk space. We're working on improving volume management and you can check\nprogress on this in pull request #8484",
"title": "Creating and mounting a Data Volume Container"
},
{
"loc": "/userguide/dockervolumes#backup-restore-or-migrate-data-volumes",
"tags": "",
"text": "Another useful function we can perform with volumes is use them for\nbackups, restores or migrations. We do this by using the --volumes-from flag to create a new container that mounts that volume,\nlike so: $ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata Here we've launched a new container and mounted the volume from the dbdata container. We've then mounted a local host directory as /backup . Finally, we've passed a command that uses tar to backup the\ncontents of the dbdata volume to a backup.tar file inside our /backup directory. When the command completes and the container stops\nwe'll be left with a backup of our dbdata volume. You could then restore it to the same container, or another that you've made\nelsewhere. Create a new container. $ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash Then un-tar the backup file in the new container's data volume. $ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar You can use the techniques above to automate backup, migration and\nrestore testing using your preferred tools.",
"title": "Backup, restore, or migrate data volumes"
},
{
"loc": "/userguide/dockervolumes#next-steps",
"tags": "",
"text": "Now we've learned a bit more about how to use Docker we're going to see how to\ncombine Docker with the services available on Docker Hub including Automated Builds and private\nrepositories. Go to Working with Docker Hub .",
"title": "Next steps"
},
{
"loc": "/userguide/dockerrepos/",
"tags": "",
"text": "Working with Docker Hub\nSo far you've learned how to use the command line to run Docker on your local host.\nYou've learned how to pull down images to build containers\nfrom existing images and you've learned how to create your own images.\nNext, you're going to learn how to use the Docker Hub to\nsimplify and enhance your Docker workflows.\nThe Docker Hub is a public registry maintained by Docker,\nInc. It contains over 15,000 images you can download and use to build containers. It also\nprovides authentication, work group structure, workflow tools like webhooks and build\ntriggers, and privacy tools like private repositories for storing images you don't want\nto share publicly.\nDocker commands and Docker Hub\nDocker itself provides access to Docker Hub services via the docker search,\npull, login, and push commands. This page will show you how these commands work.\nAccount creation and login\nTypically, you'll want to start by creating an account on Docker Hub (if you haven't\nalready) and logging in. You can create your account directly on\nDocker Hub, or by running:\n$ sudo docker login\n\nThis will prompt you for a user name, which will become the public namespace for your\npublic repositories.\nIf your user name is available, Docker will prompt you to enter a password and your\ne-mail address. It will then automatically log you in. You can now commit and\npush your own images up to your repos on Docker Hub.\n\nNote:\nYour authentication credentials will be stored in the .dockercfg\nauthentication file in your home directory.\n\nSearching for images\nYou can search the Docker Hub registry via its search\ninterface or by using the command line interface. Searching can find images by image\nname, user name, or description:\n$ sudo docker search centos\nNAME DESCRIPTION STARS OFFICIAL TRUSTED\ncentos Official CentOS 6 Image as of 12 April 2014 88\ntianon/centos CentOS 5 and 6, created using rinse instea... 
21\n...\n\nThere you can see two example results: centos and\ntianon/centos. The second result shows that it comes from\nthe public repository of a user, named tianon/, while the first result,\ncentos, doesn't explicitly list a repository which means that it comes from the\ntrusted top-level namespace. The / character separates a user's\nrepository from the image name.\nOnce you've found the image you want, you can download it with docker pull imagename:\n$ sudo docker pull centos\nPulling repository centos\n0b443ba03958: Download complete\n539c0211cd76: Download complete\n511136ea3c5a: Download complete\n7064731afe90: Download complete\n\nStatus: Downloaded newer image for centos\n\nYou now have an image from which you can run containers.\nContributing to Docker Hub\nAnyone can pull public images from the Docker Hub\nregistry, but if you would like to share your own images, then you must\nregister first, as we saw in the first section of the Docker User\nGuide.\nPushing a repository to Docker Hub\nIn order to push a repository to its registry, you need to have named an image\nor committed your container to a named image as we saw\nhere.\nNow you can push this repository to the registry designated by its name or tag.\n$ sudo docker push yourname/newimage\n\nThe image will then be uploaded and available for use by your team-mates and/or the\ncommunity.\nFeatures of Docker Hub\nLet's take a closer look at some of the features of Docker Hub. You can find more\ninformation here.\n\nPrivate repositories\nOrganizations and teams\nAutomated Builds\nWebhooks\n\nPrivate Repositories\nSometimes you have images you don't want to make public and share with\neveryone. So Docker Hub allows you to have private repositories. You can\nsign up for a plan here.\nOrganizations and teams\nOne of the useful aspects of private repositories is that you can share\nthem only with members of your organization or team. 
Docker Hub lets you\ncreate organizations where you can collaborate with your colleagues and\nmanage private repositories. You can learn how to create and manage an organization\nhere.\nAutomated Builds\nAutomated Builds automate the building and updating of images from\nGitHub or BitBucket, directly on Docker\nHub. It works by adding a commit hook to your selected GitHub or BitBucket repository,\ntriggering a build and update when you push a commit.\nTo set up an Automated Build\n\nCreate a Docker Hub account and log in.\nLink your GitHub or BitBucket account through the \"Link Accounts\" menu.\nConfigure an Automated Build.\nPick a GitHub or BitBucket project that has a Dockerfile that you want to build.\nPick the branch you want to build (the default is the master branch).\nGive the Automated Build a name.\nAssign an optional Docker tag to the Build.\nSpecify where the Dockerfile is located. The default is /.\n\nOnce the Automated Build is configured, it will automatically trigger a\nbuild and, in a few minutes, you should see your new Automated Build on the Docker Hub\nRegistry. It will stay in sync with your GitHub or BitBucket repository until you\ndeactivate the Automated Build.\nIf you want to see the status of your Automated Builds, you can go to your\nAutomated Builds page on the Docker Hub,\nand it will show you the status of your builds and their build history.\nOnce you've created an Automated Build you can deactivate or delete it. You\ncannot, however, push to an Automated Build with the docker push command.\nYou can only manage it by committing code to your GitHub or BitBucket\nrepository.\nYou can create multiple Automated Builds per repository and configure them\nto point to specific Dockerfiles or Git branches.\nBuild Triggers\nAutomated Builds can also be triggered via a URL on Docker Hub. 
This\nallows you to rebuild an Automated Build image on demand.\nWebhooks\nWebhooks are attached to your repositories and allow you to trigger an\nevent when an image or updated image is pushed to the repository. With\na webhook you can specify a target URL and a JSON payload that will be\ndelivered when the image is pushed.\nSee the Docker Hub documentation for more information on\nwebhooks.\nNext steps\nGo and use Docker!",
"title": "Working with Docker Hub"
},
{
"loc": "/userguide/dockerrepos#working-with-docker-hub",
"tags": "",
"text": "So far you've learned how to use the command line to run Docker on your local host.\nYou've learned how to pull down images to build containers\nfrom existing images and you've learned how to create your own images . Next, you're going to learn how to use the Docker Hub to\nsimplify and enhance your Docker workflows. The Docker Hub is a public registry maintained by Docker,\nInc. It contains over 15,000 images you can download and use to build containers. It also\nprovides authentication, work group structure, workflow tools like webhooks and build\ntriggers, and privacy tools like private repositories for storing images you don't want\nto share publicly.",
"title": "Working with Docker Hub"
},
{
"loc": "/userguide/dockerrepos#docker-commands-and-docker-hub",
"tags": "",
"text": "Docker itself provides access to Docker Hub services via the docker search , pull , login , and push commands. This page will show you how these commands work. Account creation and login Typically, you'll want to start by creating an account on Docker Hub (if you haven't\nalready) and logging in. You can create your account directly on Docker Hub , or by running: $ sudo docker login This will prompt you for a user name, which will become the public namespace for your\npublic repositories.\nIf your user name is available, Docker will prompt you to enter a password and your\ne-mail address. It will then automatically log you in. You can now commit and\npush your own images up to your repos on Docker Hub. Note: \nYour authentication credentials will be stored in the .dockercfg \nauthentication file in your home directory.",
"title": "Docker commands and Docker Hub"
},
{
"loc": "/userguide/dockerrepos#searching-for-images",
"tags": "",
"text": "You can search the Docker Hub registry via its search\ninterface or by using the command line interface. Searching can find images by image\nname, user name, or description: $ sudo docker search centos\nNAME DESCRIPTION STARS OFFICIAL TRUSTED\ncentos Official CentOS 6 Image as of 12 April 2014 88\ntianon/centos CentOS 5 and 6, created using rinse instea... 21\n... There you can see two example results: centos and tianon/centos . The second result shows that it comes from\nthe public repository of a user, named tianon/ , while the first result, centos , doesn't explicitly list a repository which means that it comes from the\ntrusted top-level namespace. The / character separates a user's\nrepository from the image name. Once you've found the image you want, you can download it with docker pull imagename : $ sudo docker pull centos\nPulling repository centos\n0b443ba03958: Download complete\n539c0211cd76: Download complete\n511136ea3c5a: Download complete\n7064731afe90: Download complete\n\nStatus: Downloaded newer image for centos You now have an image from which you can run containers.",
"title": "Searching for images"
},
{
"loc": "/userguide/dockerrepos#contributing-to-docker-hub",
"tags": "",
"text": "Anyone can pull public images from the Docker Hub \nregistry, but if you would like to share your own images, then you must\nregister first, as we saw in the first section of the Docker User\nGuide .",
"title": "Contributing to Docker Hub"
},
{
"loc": "/userguide/dockerrepos#pushing-a-repository-to-docker-hub",
"tags": "",
"text": "In order to push a repository to its registry, you need to have named an image\nor committed your container to a named image as we saw here . Now you can push this repository to the registry designated by its name or tag. $ sudo docker push yourname/newimage The image will then be uploaded and available for use by your team-mates and/or the\ncommunity.",
"title": "Pushing a repository to Docker Hub"
},
{
"loc": "/userguide/dockerrepos#features-of-docker-hub",
"tags": "",
"text": "Let's take a closer look at some of the features of Docker Hub. You can find more\ninformation here . Private repositories Organizations and teams Automated Builds Webhooks Private Repositories Sometimes you have images you don't want to make public and share with\neveryone. So Docker Hub allows you to have private repositories. You can\nsign up for a plan here . Organizations and teams One of the useful aspects of private repositories is that you can share\nthem only with members of your organization or team. Docker Hub lets you\ncreate organizations where you can collaborate with your colleagues and\nmanage private repositories. You can learn how to create and manage an organization here . Automated Builds Automated Builds automate the building and updating of images from GitHub or BitBucket , directly on Docker\nHub. It works by adding a commit hook to your selected GitHub or BitBucket repository,\ntriggering a build and update when you push a commit. To setup an Automated Build Create a Docker Hub account and login. Link your GitHub or BitBucket account through the \"Link Accounts\" menu. Configure an Automated Build . Pick a GitHub or BitBucket project that has a Dockerfile that you want to build. Pick the branch you want to build (the default is the master branch). Give the Automated Build a name. Assign an optional Docker tag to the Build. Specify where the Dockerfile is located. The default is / . Once the Automated Build is configured it will automatically trigger a\nbuild and, in a few minutes, you should see your new Automated Build on the Docker Hub \nRegistry. It will stay in sync with your GitHub and BitBucket repository until you\ndeactivate the Automated Build. If you want to see the status of your Automated Builds, you can go to your Automated Builds page on the Docker Hub,\nand it will show you the status of your builds and their build history. Once you've created an Automated Build you can deactivate or delete it. 
You\ncannot, however, push to an Automated Build with the docker push command.\nYou can only manage it by committing code to your GitHub or BitBucket\nrepository. You can create multiple Automated Builds per repository and configure them\nto point to specific Dockerfiles or Git branches. Build Triggers Automated Builds can also be triggered via a URL on Docker Hub. This\nallows you to rebuild an Automated Build image on demand. Webhooks Webhooks are attached to your repositories and allow you to trigger an\nevent when an image or updated image is pushed to the repository. With\na webhook you can specify a target URL and a JSON payload that will be\ndelivered when the image is pushed. See the Docker Hub documentation for more information on\nwebhooks.",
"title": "Features of Docker Hub"
},
{
"loc": "/userguide/dockerrepos#next-steps",
"tags": "",
"text": "Go and use Docker!",
"title": "Next steps"
},
{
"loc": "/userguide/level1/",
"tags": "",
"text": "Back\nDockerfile Tutorial\nTest your Dockerfile knowledge - Level 1\nQuestions\n\n What is the Dockerfile instruction to specify the base image ?\n \n The right answer was FROM\n \n What is the Dockerfile instruction to execute any commands on the current image and commit the results?\n \n The right answer was RUN\n \n What is the Dockerfile instruction to specify the maintainer of the Dockerfile?\n \n The right answer was MAINTAINER\n \n What is the character used to add comment in Dockerfiles?\n \n The right answer was #\n \n Congratulations, you made no mistake!\n Tell the world Tweet\n And try the next challenge: Fill the Dockerfile\n \n Your Dockerfile skills are not yet perfect, try to take the time to read this tutorial again.\n You're almost there! Read carefully the sections corresponding to your errors, and take the test again!\n \n Check your answers\n\n\nFill the Dockerfile\nYour best friend Eric Bardin sent you a Dockerfile, but some parts were lost in the ocean. Can you find the missing parts?\n\n\n This is a Dockerfile to create an image with Memcached and Emacs installed. \n VERSION 1.0\n use the ubuntu base image provided by dotCloud\n ub\n E B, eric.bardin@dotcloud.com\n make sure the package repository is up to date\n echo \"deb http://archive.ubuntu.com/ubuntu precise main universe\" /etc/apt/sources.list\n apt-get update\n install memcached\nRUN apt-get install -y \n install emacs\n apt-get install -y emacs23\n\n\nCongratulations, you successfully restored Eric's Dockerfile! You are ready to containerize the world!.\nTell the world! Tweet\n\n\nWooops, there are one or more errors in the Dockerfile. Try again.\n\n\nCheck the Dockerfile\nWhat's next?\nIn the next level, we will go into more detail about how to specify which command should be executed when the container starts,\nwhich user to use, and how expose a particular port.\n\nBack\nGo to the next level",
"title": "**HIDDEN**"
},
{
"loc": "/userguide/level1#dockerfile-tutorial",
"tags": "",
"text": "",
"title": "Dockerfile Tutorial"
},
{
"loc": "/userguide/level1#test-your-dockerfile-knowledge-level-1",
"tags": "",
"text": "Questions \n What is the Dockerfile instruction to specify the base image ? \n \n The right answer was FROM \n \n What is the Dockerfile instruction to execute any commands on the current image and commit the results? \n \n The right answer was RUN \n \n What is the Dockerfile instruction to specify the maintainer of the Dockerfile? \n \n The right answer was MAINTAINER \n \n What is the character used to add comment in Dockerfiles? \n \n The right answer was # \n \n Congratulations, you made no mistake! \n Tell the world Tweet \n And try the next challenge: Fill the Dockerfile \n \n Your Dockerfile skills are not yet perfect, try to take the time to read this tutorial again. \n You're almost there! Read carefully the sections corresponding to your errors, and take the test again! \n \n Check your answers Fill the Dockerfile Your best friend Eric Bardin sent you a Dockerfile, but some parts were lost in the ocean. Can you find the missing parts? This is a Dockerfile to create an image with Memcached and Emacs installed. VERSION 1.0 use the ubuntu base image provided by dotCloud ub E B , eric.bardin@dotcloud.com make sure the package repository is up to date echo \"deb http://archive.ubuntu.com/ubuntu precise main universe\" /etc/apt/sources.list apt-get update install memcached\nRUN apt-get install -y install emacs apt-get install -y emacs23 Congratulations, you successfully restored Eric's Dockerfile! You are ready to containerize the world!. \nTell the world! Tweet Wooops, there are one or more errors in the Dockerfile. Try again. Check the Dockerfile",
"title": "Test your Dockerfile knowledge - Level 1"
},
{
"loc": "/userguide/level1#whats-next",
"tags": "",
"text": "In the next level, we will go into more detail about how to specify which command should be executed when the container starts,\nwhich user to use, and how expose a particular port. Back Go to the next level",
"title": "What's next?"
},
{
"loc": "/userguide/level2/",
"tags": "",
"text": "Back\nDockerfile Tutorial\nTest your Dockerfile knowledge - Level 2\nQuestions:\n\nWhat is the Dockerfile instruction to specify the base image?\n \n The right answer was FROM\n Which Dockerfile instruction sets the default command for your image?\n \n The right answer was ENTRYPOINT or CMD\n What is the character used to add comments in Dockerfiles?\n \n The right answer was #\n Which Dockerfile instruction sets the username to use when running the image?\n \n The right answer was USER\n What is the Dockerfile instruction to execute any command on the current image and commit the results?\n \n The right answer was RUN\n Which Dockerfile instruction sets ports to be exposed when running the image?\n \n The right answer was EXPOSE\n What is the Dockerfile instruction to specify the maintainer of the Dockerfile?\n \n The right answer was MAINTAINER\n Which Dockerfile instruction lets you trigger a command as soon as the container starts?\n \n The right answer was ENTRYPOINT or CMD\n \n\n Congratulations, you made no mistake!\n Tell the world Tweet\n And try the next challenge: Fill the Dockerfile\n \n Your Dockerfile skills are not yet perfect, try to take the time to read this tutorial again.\n You're almost there! Read carefully the sections corresponding to your errors, and take the test again!\n \n Check your answers\n\n\nFill the Dockerfile\n\nYour best friend Roberto Hashioka sent you a Dockerfile, but some parts were lost in the ocean. 
Can you find the missing parts?\n\n\n Redis\n\n VERSION 0.42\n\n use the ubuntu base image provided by dotCloud\n ub\nMAINT Ro Ha roberto.hashioka@dotcloud.com\n make sure the package repository is up to date\n echo \"deb http://archive.ubuntu.com/ubuntu precise main universe\" /etc/apt/sources.list\n apt-get update\n install wget (required for redis installation)\n apt-get install -y wget\n install make (required for redis installation)\n apt-get install -y make\n install gcc (required for redis installation)\nRUN apt-get install -y \n install apache2\n wget http://download.redis.io/redis-stable.tar.gz\ntar xvzf redis-stable.tar.gz\ncd redis-stable make make install\n launch redis when starting the image\n [\"redis-server\"]\n run as user daemon\n daemon\n expose port 6379\n 6379\n\nCongratulations, you successfully restored Roberto's Dockerfile! You are ready to containerize the world!\n Tell the world! Tweet\n\nWooops, there are one or more errors in the Dockerfile. Try again.\n\nCheck the Dockerfile\n\nWhat's next?\n\nThanks for going through our tutorial! We will be posting Level 3 in the future. \nTo improve your Dockerfile writing skills even further, visit the Dockerfile best practices page.\nBack to the Docs!",
"title": "**HIDDEN**"
},
{
"loc": "/userguide/level2#dockerfile-tutorial",
"tags": "",
"text": "",
"title": "Dockerfile Tutorial"
},
{
"loc": "/userguide/level2#test-your-dockerfile-knowledge-level-2",
"tags": "",
"text": "Questions: \nWhat is the Dockerfile instruction to specify the base image? \n \n The right answer was FROM \n Which Dockerfile instruction sets the default command for your image? \n \n The right answer was ENTRYPOINT or CMD \n What is the character used to add comments in Dockerfiles? \n \n The right answer was # \n Which Dockerfile instruction sets the username to use when running the image? \n \n The right answer was USER \n What is the Dockerfile instruction to execute any command on the current image and commit the results? \n \n The right answer was RUN \n Which Dockerfile instruction sets ports to be exposed when running the image? \n \n The right answer was EXPOSE \n What is the Dockerfile instruction to specify the maintainer of the Dockerfile? \n \n The right answer was MAINTAINER \n Which Dockerfile instruction lets you trigger a command as soon as the container starts? \n \n The right answer was ENTRYPOINT or CMD \n \n\n Congratulations, you made no mistake! \n Tell the world Tweet \n And try the next challenge: Fill the Dockerfile \n \n Your Dockerfile skills are not yet perfect, try to take the time to read this tutorial again. \n You're almost there! Read carefully the sections corresponding to your errors, and take the test again! \n \n Check your answers Fill the Dockerfile \nYour best friend Roberto Hashioka sent you a Dockerfile, but some parts were lost in the ocean. Can you find the missing parts? 
Redis VERSION 0.42 use the ubuntu base image provided by dotCloud ub \nMAINT Ro Ha roberto.hashioka@dotcloud.com make sure the package repository is up to date echo \"deb http://archive.ubuntu.com/ubuntu precise main universe\" /etc/apt/sources.list apt-get update install wget (required for redis installation) apt-get install -y wget install make (required for redis installation) apt-get install -y make install gcc (required for redis installation)\nRUN apt-get install -y install apache2 wget http://download.redis.io/redis-stable.tar.gz tar xvzf redis-stable.tar.gz cd redis-stable make make install launch redis when starting the image [\"redis-server\"] run as user daemon daemon expose port 6379 6379 Congratulations, you successfully restored Roberto's Dockerfile! You are ready to containerize the world! \n Tell the world! Tweet Wooops, there are one or more errors in the Dockerfile. Try again. Check the Dockerfile",
"title": "Test your Dockerfile knowledge - Level 2"
},
{
"loc": "/userguide/level2#whats-next",
"tags": "",
"text": "Thanks for going through our tutorial! We will be posting Level 3 in the future. To improve your Dockerfile writing skills even further, visit the Dockerfile best practices page . Back to the Docs!",
"title": "What's next?"
},
{
"loc": "/compose/",
"tags": "",
"text": "Docker Compose\nCompose is a tool for defining and running complex applications with Docker.\nWith Compose, you define a multi-container application in a single file, then\nspin your application up in a single command which does everything that needs to\nbe done to get it running.\nCompose is great for development environments, staging servers, and CI. We don't\nrecommend that you use it in production yet.\nUsing Compose is basically a three-step process.\nFirst, you define your app's environment with a Dockerfile so it can be\nreproduced anywhere:\nFROM python:2.7\nWORKDIR /code\nADD requirements.txt /code/\nRUN pip install -r requirements.txt\nADD . /code\nCMD python app.py\n\n\nNext, you define the services that make up your app in docker-compose.yml so\nthey can be run together in an isolated environment:\nweb:\n build: .\n links:\n - db\n ports:\n - 8000:8000\ndb:\n image: postgres\n\n\nLastly, run docker-compose up and Compose will start and run your entire app.\nCompose has commands for managing the whole lifecycle of your application:\n\nStart, stop and rebuild services\nView the status of running services\nStream the log output of running services\nRun a one-off command on a service\n\nCompose documentation\n\nInstalling Compose\nCommand line reference\nYaml file reference\nCompose environment variables\nCompose command line completion\n\nQuick start\nLet's get started with a walkthrough of getting a simple Python web app running\non Compose. 
It assumes a little knowledge of Python, but the concepts\ndemonstrated here should be understandable even if you're not familiar with\nPython.\nInstallation and set-up\nFirst, install Docker and Compose.\nNext, you'll want to make a directory for the project:\n$ mkdir composetest\n$ cd composetest\n\nInside this directory, create app.py, a simple web app that uses the Flask\nframework and increments a value in Redis:\nfrom flask import Flask\nfrom redis import Redis\nimport os\napp = Flask(__name__)\nredis = Redis(host='redis', port=6379)\n\n@app.route('/')\ndef hello():\n redis.incr('hits')\n return 'Hello World! I have been seen %s times.' % redis.get('hits')\n\nif __name__ == '__main__':\n app.run(host='0.0.0.0', debug=True)\n\n\nNext, define the Python dependencies in a file called requirements.txt:\nflask\nredis\n\nCreate a Docker image\nNow, create a Docker image containing all of your app's dependencies. You\nspecify how to build the image using a file called\nDockerfile:\nFROM python:2.7\nADD . /code\nWORKDIR /code\nRUN pip install -r requirements.txt\n\nThis tells Docker to include Python, your code, and your Python dependencies in\na Docker image. For more information on how to write Dockerfiles, see the\nDocker user\nguide\nand the\nDockerfile reference.\nDefine services\nNext, define a set of services using docker-compose.yml:\nweb:\n build: .\n command: python app.py\n ports:\n - \"5000:5000\"\n volumes:\n - .:/code\n links:\n - redis\nredis:\n image: redis\n\nThis defines two services:\n\nweb, which is built from the Dockerfile in the current directory. 
It also\n says to run the command python app.py inside the image, forward the exposed\n port 5000 on the container to port 5000 on the host machine, connect up the\n Redis service, and mount the current directory inside the container so we can\n work on code without having to rebuild the image.\nredis, which uses the public image\n redis, which gets pulled from the\n Docker Hub registry.\n\nBuild and run your app with Compose\nNow, when you run docker-compose up, Compose will pull a Redis image, build an\nimage for your code, and start everything up:\n$ docker-compose up\nPulling image redis...\nBuilding web...\nStarting composetest_redis_1...\nStarting composetest_web_1...\nredis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3\nweb_1 | * Running on http://0.0.0.0:5000/\n\nThe web app should now be listening on port 5000 on your Docker daemon host (if\nyou're using Boot2docker, boot2docker ip will tell you its address).\nIf you want to run your services in the background, you can pass the -d flag\n(for daemon mode) to docker-compose up and use docker-compose ps to see what\nis currently running:\n$ docker-compose up -d\nStarting composetest_redis_1...\nStarting composetest_web_1...\n$ docker-compose ps\n Name Command State Ports\n-------------------------------------------------------------------\ncomposetest_redis_1 /usr/local/bin/run Up\ncomposetest_web_1 /bin/sh -c python app.py Up 5000->5000/tcp\n\nThe docker-compose run command allows you to run one-off commands for your\nservices. For example, to see what environment variables are available to the\nweb service:\n$ docker-compose run web env\n\nSee docker-compose --help to see other available commands.\nIf you started Compose with docker-compose up -d, you'll probably want to stop\nyour services once you've finished with them:\n$ docker-compose stop\n\nAt this point, you have seen the basics of how Compose works.",
"title": "Docker Compose"
},
{
"loc": "/compose#docker-compose",
"tags": "",
"text": "Compose is a tool for defining and running complex applications with Docker.\nWith Compose, you define a multi-container application in a single file, then\nspin your application up in a single command which does everything that needs to\nbe done to get it running. Compose is great for development environments, staging servers, and CI. We don't\nrecommend that you use it in production yet. Using Compose is basically a three-step process. First, you define your app's environment with a Dockerfile so it can be\nreproduced anywhere: FROM python:2.7\nWORKDIR /code\nADD requirements.txt /code/\nRUN pip install -r requirements.txt\nADD . /code\nCMD python app.py Next, you define the services that make up your app in docker-compose.yml so\nthey can be run together in an isolated environment: web:\n build: .\n links:\n - db\n ports:\n - 8000:8000 \ndb:\n image: postgres Lastly, run docker-compose up and Compose will start and run your entire app. Compose has commands for managing the whole lifecycle of your application: Start, stop and rebuild services View the status of running services Stream the log output of running services Run a one-off command on a service",
"title": "Docker Compose"
},
{
"loc": "/compose#compose-documentation",
"tags": "",
"text": "Installing Compose Command line reference Yaml file reference Compose environment variables Compose command line completion",
"title": "Compose documentation"
},
{
"loc": "/compose#quick-start",
"tags": "",
"text": "Let's get started with a walkthrough of getting a simple Python web app running\non Compose. It assumes a little knowledge of Python, but the concepts\ndemonstrated here should be understandable even if you're not familiar with\nPython. Installation and set-up First, install Docker and Compose . Next, you'll want to make a directory for the project: $ mkdir composetest\n$ cd composetest Inside this directory, create app.py , a simple web app that uses the Flask\nframework and increments a value in Redis: from flask import Flask\nfrom redis import Redis\nimport os\napp = Flask(__name__)\nredis = Redis(host='redis', port=6379)\n\n@app.route('/')\ndef hello():\n redis.incr('hits')\n return 'Hello World! I have been seen %s times.' % redis.get('hits')\n\nif __name__ == __main__ :\n app.run(host= 0.0.0.0 , debug=True) Next, define the Python dependencies in a file called requirements.txt : flask\nredis Create a Docker image Now, create a Docker image containing all of your app's dependencies. You\nspecify how to build the image using a file called Dockerfile : FROM python:2.7\nADD . /code\nWORKDIR /code\nRUN pip install -r requirements.txt This tells Docker to include Python, your code, and your Python dependencies in\na Docker image. For more information on how to write Dockerfiles, see the Docker user\nguide \nand the Dockerfile reference . Define services Next, define a set of services using docker-compose.yml : web:\n build: .\n command: python app.py\n ports:\n - \"5000:5000\"\n volumes:\n - .:/code\n links:\n - redis\nredis:\n image: redis This defines two services: web , which is built from the Dockerfile in the current directory. It also\n says to run the command python app.py inside the image, forward the exposed\n port 5000 on the container to port 5000 on the host machine, connect up the\n Redis service, and mount the current directory inside the container so we can\n work on code without having to rebuild the image. 
redis , which uses the public image\n redis , which gets pulled from the\n Docker Hub registry. Build and run your app with Compose Now, when you run docker-compose up , Compose will pull a Redis image, build an\nimage for your code, and start everything up: $ docker-compose up\nPulling image redis...\nBuilding web...\nStarting composetest_redis_1...\nStarting composetest_web_1...\nredis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3\nweb_1 | * Running on http://0.0.0.0:5000/ The web app should now be listening on port 5000 on your Docker daemon host (if\nyou're using Boot2docker, boot2docker ip will tell you its address). If you want to run your services in the background, you can pass the -d flag\n(for daemon mode) to docker-compose up and use docker-compose ps to see what\nis currently running: $ docker-compose up -d\nStarting composetest_redis_1...\nStarting composetest_web_1...\n$ docker-compose ps\n Name Command State Ports\n-------------------------------------------------------------------\ncomposetest_redis_1 /usr/local/bin/run Up\ncomposetest_web_1 /bin/sh -c python app.py Up 5000->5000/tcp The docker-compose run command allows you to run one-off commands for your\nservices. For example, to see what environment variables are available to the web service: $ docker-compose run web env See docker-compose --help to see other available commands. If you started Compose with docker-compose up -d , you'll probably want to stop\nyour services once you've finished with them: $ docker-compose stop At this point, you have seen the basics of how Compose works.",
"title": "Quick start"
},
{
"loc": "/machine/",
"tags": "",
"text": "Docker Machine\n\nNote: Machine is currently in beta, so things are likely to change. We\ndon't recommend you use it in production yet.\n\nMachine makes it really easy to create Docker hosts on your computer, on cloud\nproviders and inside your own data center. It creates servers, installs Docker\non them, then configures the Docker client to talk to them.\nOnce your Docker host has been created, it then has a number of commands for\nmanaging them:\n\nStarting, stopping, restarting\nUpgrading Docker\nConfiguring the Docker client to talk to your host\n\nInstallation\nDocker Machine is supported on Windows, OSX, and Linux. To install Docker\nMachine, download the appropriate binary for your OS and architecture to the\ncorrect place in your PATH:\n\nWindows - x86_64\nOSX - x86_64\nLinux - x86_64\nWindows - i386\nOSX - i386\nLinux - i386\n\nNow you should be able to check the version with docker-machine -v:\n$ docker-machine -v\nmachine version 0.1.0\n\n\nGetting started with Docker Machine using a local VM\nLet's take a look at using docker-machine to creating, using, and managing a Docker\nhost inside of VirtualBox.\nFirst, ensure that\nVirtualBox 4.3.20 is correctly\ninstalled on your system.\nIf you run the docker-machine ls command to show all available machines, you will see\nthat none have been created so far.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\n\n\nTo create one, we run the docker-machine create command, passing the string\nvirtualbox to the --driver flag. 
The final argument we pass is the name of\nthe machine - in this case, we will name our machine \"dev\".\nThis will download a lightweight Linux distribution\n(boot2docker) with the Docker\ndaemon installed, and will create and start a VirtualBox VM with Docker running.\n$ docker-machine create --driver virtualbox dev\nINFO[0000] Creating SSH key...\nINFO[0000] Creating VirtualBox VM...\nINFO[0007] Starting VirtualBox VM...\nINFO[0007] Waiting for VM to start...\nINFO[0038] dev has been created and is now the active machine\nINFO[0038] To connect: docker $(docker-machine config dev) ps\n\n\nTo use the Docker CLI, you can use the env command to list the commands\nneeded to connect to the instance.\n$ docker-machine env dev\nexport DOCKER_TLS_VERIFY=yes\nexport DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client\nexport DOCKER_HOST=tcp://192.168.99.100:2376\n\n\n\nYou can see the machine you have created by running the docker-machine ls command\nagain:\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.100:2376\n\n\nThe * next to dev indicates that it is the active host.\nNext, as noted in the output of the docker-machine create command, we have to tell\nDocker to talk to that machine. You can do this with the docker-machine config\ncommand. 
For example,\n$ docker $(docker-machine config dev) ps\n\n\nThis will pass arguments to the Docker client that specify the TLS settings.\nTo see what will be passed, run docker-machine config dev.\nYou can now run Docker commands on this host:\n$ docker $(docker-machine config dev) run busybox echo hello world\nUnable to find image 'busybox' locally\nPulling repository busybox\ne72ac664f4f0: Download complete\n511136ea3c5a: Download complete\ndf7546f9f060: Download complete\ne433a6c5b276: Download complete\nhello world\n\n\nAny exposed ports are available on the Docker host\u2019s IP address, which you can\nget using the docker-machine ip command:\n$ docker-machine ip\n192.168.99.100\n\n\nNow you can manage as many local VMs running Docker as you please - just run\ndocker-machine create again.\nIf you are finished using a host, you can stop it with docker-machine stop and start\nit again with docker-machine start:\n$ docker-machine stop\n$ docker-machine start\n\n\nIf they aren't passed any arguments, commands such as docker-machine stop will run\nagainst the active host (in this case, the VirtualBox VM). You can also specify\na host to run a command against as an argument. For instance, you could also\nhave written:\n$ docker-machine stop dev\n$ docker-machine start dev\n\n\nUsing Docker Machine with a cloud provider\nOne of the nice things about docker-machine is that it provides several \u201cdrivers\u201d\nwhich let you use the same interface to create hosts on many different cloud\nplatforms. This is accomplished by using the docker-machine create command with the\n --driver flag. Here we will be demonstrating the\nDigital Ocean driver (called digitalocean), but\nthere are drivers included for several providers including Amazon Web Services,\nGoogle Compute Engine, and Microsoft Azure.\nUsually it is required that you pass account verification credentials for these\nproviders as flags to docker-machine create. 
These flags are unique for each driver.\nFor instance, to pass a Digital Ocean access token you use the\n--digitalocean-access-token flag.\nLet's take a look at how to do this.\nTo generate your access token:\n\nGo to the Digital Ocean administrator panel and click on \"Apps and API\" in\nthe side panel.\nClick on \"Generate New Token\".\nGive the token a clever name (e.g. \"machine\"), make sure the \"Write\" checkbox\nis checked, and click on \"Generate Token\".\nGrab the big long hex string that is generated (this is your token) and store it somewhere safe.\n\nNow, run docker-machine create with the digitalocean driver and pass your key to\nthe --digitalocean-access-token flag.\nExample:\n$ docker-machine create \\\n --driver digitalocean \\\n --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 \\\n staging\nINFO[0000] Creating SSH key...\nINFO[0000] Creating Digital Ocean droplet...\nINFO[0002] Waiting for SSH...\nINFO[0085] staging has been created and is now the active machine\nINFO[0085] To connect: docker $(docker-machine config dev) staging\n\n\nFor convenience, docker-machine will use sensible defaults for choosing settings such\n as the image that the VPS is based on, but they can also be overridden using\ntheir respective flags (e.g. --digitalocean-image). This is useful if, for\ninstance, you want to create a nice large instance with a lot of memory and CPUs\n(by default docker-machine creates a small VPS). For a full list of the\nflags/settings available and their defaults, see the output of\ndocker-machine create -h.\nWhen the creation of a host is initiated, a unique SSH key for accessing the\nhost (initially for provisioning, then directly later if the user runs the\ndocker-machine ssh command) will be created automatically and stored in the client's\ndirectory in ~/.docker/machines. 
After the creation of the SSH key, Docker\nwill be installed on the remote machine and the daemon will be configured to\naccept remote connections over TCP using TLS for authentication. Once this\nis finished, the host is ready for connection.\nAnd then from this point, the remote host behaves much like the local host we\ncreated in the last section. If we look at docker-machine, we\u2019ll see it is now the\nactive host:\n$ docker-machine active dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Running tcp://192.168.99.103:2376\nstaging * digitalocean Running tcp://104.236.50.118:2376\n\n\nTo select an active host, you can use the docker-machine active command.\n$ docker-machine active dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.103:2376\nstaging digitalocean Running tcp://104.236.50.118:2376\n\n\nTo remove a host and all of its containers and images, use docker-machine rm:\n$ docker-machine rm dev staging\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\n\n\nAdding a host without a driver\nYou can add a host to Docker which only has a URL and no driver. Therefore it\ncan be used as an alias for an existing host so you don\u2019t have to type out the URL\nevery time you run a Docker command.\n$ docker-machine create --url=tcp://50.134.234.20:2376 custombox\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ncustombox * none Running tcp://50.134.234.20:2376\n\n\nUsing Docker Machine with Docker Swarm\nDocker Machine can also provision Swarm \nclusters. This can be used with any driver and will be secured with TLS. \n\nNote: This is an experimental feature so the subcommands and\noptions are likely to change in future versions.\n\nFirst, create a Swarm token. Optionally, you can use another discovery service.\nSee the Swarm docs for details.\nTo create the token, first create a Machine. 
This example will use VirtualBox.\n$ docker-machine create -d virtualbox local\n\n\nLoad the Machine configuration into your shell:\n$ $(docker-machine env local)\n\n\nThen generate the token using the Swarm Docker image:\n$ docker run swarm create\n1257e0f0bbb499b5cd04b4c9bdb2dab3\n\n\nOnce you have the token, you can create the cluster.\nSwarm Master\nCreate the Swarm master:\ndocker-machine create \\\n -d virtualbox \\\n --swarm \\\n --swarm-master \\\n --swarm-discovery token://TOKEN-FROM-ABOVE \\\n swarm-master\n\n\nReplace TOKEN-FROM-ABOVE with your random token.\nThis will create the Swarm master and add itself as a Swarm node.\nSwarm Nodes\nNow, create more Swarm nodes:\ndocker-machine create \\\n -d virtualbox \\\n --swarm \\\n --swarm-discovery token://TOKEN-FROM-ABOVE \\\n swarm-node-00\n\n\nYou now have a Swarm cluster across two nodes.\nTo connect to the Swarm master, use docker-machine env --swarm swarm-master.\nFor example:\n$ docker-machine env --swarm swarm-master\nexport DOCKER_TLS_VERIFY=yes\nexport DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client\nexport DOCKER_HOST=tcp://192.168.99.100:3376\n\n\nYou can load this into your environment using\n$(docker-machine env --swarm swarm-master).\nNow you can use the Docker CLI to query:\n$ docker info\nContainers: 1\nNodes: 1\n swarm-master: 192.168.99.100:2376\n \u2514 Containers: 2\n \u2514 Reserved CPUs: 0 / 4\n \u2514 Reserved Memory: 0 B / 999.9 MiB\n\n\nSubcommands\nactive\nGet or set the active machine.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Running tcp://192.168.99.103:2376\nstaging * digitalocean Running tcp://104.236.50.118:2376\n$ docker-machine active dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.103:2376\nstaging digitalocean Running tcp://104.236.50.118:2376\n\n\ncreate\nCreate a machine.\n$ docker-machine create --driver virtualbox dev\nINFO[0000] Creating SSH key...\nINFO[0000] Creating VirtualBox 
VM...\nINFO[0007] Starting VirtualBox VM...\nINFO[0007] Waiting for VM to start...\nINFO[0038] dev has been created and is now the active machine. To point Docker at this machine, run: export DOCKER_HOST=$(docker-machine url) DOCKER_AUTH=identity\n\n\nconfig\nShow the Docker client configuration for a machine.\n$ docker-machine config dev\n--tls --tlscacert=/Users/ehazlett/.docker/machines/dev/ca.pem --tlscert=/Users/ehazlett/.docker/machines/dev/cert.pem --tlskey=/Users/ehazlett/.docker/machines/dev/key.pem -H tcp://192.168.99.103:2376\n\n\nenv\nSet environment variables to dictate that docker should run a command against\na particular machine.\ndocker-machine env machinename will print out export commands which can be\nrun in a subshell. Running docker-machine env -u will print\nunset commands which reverse this effect.\n$ env | grep DOCKER\n$ $(docker-machine env dev)\n$ env | grep DOCKER\nDOCKER_HOST=tcp://192.168.99.101:2376\nDOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client\nDOCKER_TLS_VERIFY=yes\n$ # If you run a docker command, now it will run against that host.\n$ $(docker-machine env -u)\n$ env | grep DOCKER\n$ # The environment variables have been unset.\n\n\ninspect\nInspect information about a machine.\n$ docker-machine inspect dev\n{\n DriverName: virtualbox,\n Driver: {\n MachineName: docker-host-128be8d287b2028316c0ad5714b90bcfc11f998056f2f790f7c1f43f3d1e6eda,\n SSHPort: 55834,\n Memory: 1024,\n DiskSize: 20000,\n Boot2DockerURL: \n }\n}\n\n\nhelp\nShow help text.\nip\nGet the IP address of a machine.\n$ docker-machine ip\n192.168.99.104\n\n\nkill\nKill (abruptly force stop) a machine.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.104:2376\n$ docker-machine kill dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Stopped\n\n\nls\nList machines.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Stopped\nfoo0 virtualbox Running 
tcp://192.168.99.105:2376\nfoo1 virtualbox Running tcp://192.168.99.106:2376\nfoo2 virtualbox Running tcp://192.168.99.107:2376\nfoo3 virtualbox Running tcp://192.168.99.108:2376\nfoo4 * virtualbox Running tcp://192.168.99.109:2376\n\n\nrestart\nRestart a machine. Oftentimes this is equivalent to\ndocker-machine stop; docker-machine start.\n$ docker-machine restart\nINFO[0005] Waiting for VM to start...\n\n\nrm\nRemove a machine. This will remove the local reference as well as delete it\non the cloud provider or virtualization management platform.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\nfoo0 virtualbox Running tcp://192.168.99.105:2376\nfoo1 virtualbox Running tcp://192.168.99.106:2376\n$ docker-machine rm foo1\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\nfoo0 virtualbox Running tcp://192.168.99.105:2376\n\n\nssh\nLog into or run a command on a machine using SSH.\nTo log in, just run docker-machine ssh machinename:\n$ docker-machine ssh dev\n ## .\n ## ## ## ==\n ## ## ## ## ===\n /\\___/ ===\n ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~\n \\______ o __/\n \\ \\ __/\n \\____\\______/\n _ _ ____ _ _\n| |__ ___ ___ | |_|___ \\ __| | ___ ___| | _____ _ __\n| '_ \\ / _ \\ / _ \\| __| __) / _` |/ _ \\ / __| |/ / _ \\ '__|\n| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| __/ |\n|_.__/ \\___/ \\___/ \\__|_____\\__,_|\\___/ \\___|_|\\_\\___|_|\nBoot2Docker version 1.4.0, build master : 69cf398 - Fri Dec 12 01:39:42 UTC 2014\ndocker@boot2docker:~$ ls /\nUsers/ dev/ home/ lib/ mnt/ proc/ run/ sys/ usr/\nbin/ etc/ init linuxrc opt/ root/ sbin/ tmp var/\n\n\nYou can also specify commands to run remotely by appending them directly to the\ndocker-machine ssh command, much like the regular ssh program works:\n$ docker-machine ssh dev free\n total used free shared buffers\nMem: 1023556 183136 840420 0 30920\n-/+ buffers: 152216 871340\nSwap: 1212036 0 1212036\n\n\nIf the command you are appending has flags, e.g. 
df -h, you can use the flag\nparsing terminator (--) to avoid confusing the docker-machine client, which\nwill otherwise interpret them as flags you intended to pass to it:\n$ docker-machine ssh dev -- df -h\nFilesystem Size Used Available Use% Mounted on\nrootfs 899.6M 85.9M 813.7M 10% /\ntmpfs 899.6M 85.9M 813.7M 10% /\ntmpfs 499.8M 0 499.8M 0% /dev/shm\n/dev/sda1 18.2G 58.2M 17.2G 0% /mnt/sda1\ncgroup 499.8M 0 499.8M 0% /sys/fs/cgroup\n/dev/sda1 18.2G 58.2M 17.2G 0%\n/mnt/sda1/var/lib/docker/aufs\n\n\nstart\nGracefully start a machine.\n$ docker-machine start\nINFO[0005] Waiting for VM to start...\n\n\nstop\nGracefully stop a machine.\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.104:2376\n$ docker-machine stop dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Stopped\n\n\nupgrade\nUpgrade a machine to the latest version of Docker.\n$ docker-machine upgrade dev\n\n\nurl\nGet the URL of a host.\n$ docker-machine url\ntcp://192.168.99.109:2376\n\n\nDrivers\nTODO: List all possible values (where applicable) for all flags for every\ndriver.\nAmazon Web Services\nCreate machines on Amazon Web Services. You will need an Access Key ID, Secret Access Key and a VPC ID. To find the VPC ID, log in to the AWS console and go to Services - VPC - Your VPCs. Select the one where you would like to launch the instance.\nOptions:\n\n--amazonec2-access-key: required Your access key ID for the Amazon Web Services API.\n--amazonec2-ami: The AMI ID of the instance to use. Default: ami-4ae27e22\n--amazonec2-instance-type: The instance type to run. Default: t2.micro\n--amazonec2-region: The region to use when launching the instance. Default: us-east-1\n--amazonec2-root-size: The root disk size of the instance (in GB). Default: 16\n--amazonec2-secret-key: required Your secret access key for the Amazon Web Services API.\n--amazonec2-security-group: AWS VPC security group name. 
Default: docker-machine\n--amazonec2-session-token: Your session token for the Amazon Web Services API.\n--amazonec2-subnet-id: AWS VPC subnet ID\n--amazonec2-vpc-id: required Your VPC ID to launch the instance in.\n--amazonec2-zone: The AWS zone to launch the instance in (i.e. one of a,b,c,d,e). Default: a\n\nBy default, the Amazon EC2 driver will use a daily image of Ubuntu 14.04 LTS.\n\n\n\nRegion\nAMI ID\n\n\n\n\nap-northeast-1\nami-44f1e245\n\n\nap-southeast-1\nami-f95875ab\n\n\nap-southeast-2\nami-890b62b3\n\n\ncn-north-1\nami-fe7ae8c7\n\n\neu-west-1\nami-823686f5\n\n\neu-central-1\nami-ac1524b1\n\n\nsa-east-1\nami-c770c1da\n\n\nus-east-1\nami-4ae27e22\n\n\nus-west-1\nami-d1180894\n\n\nus-west-2\nami-898dd9b9\n\n\nus-gov-west-1\nami-cf5630ec\n\n\n\nDigital Ocean\nCreate Docker machines on Digital Ocean.\nYou need to create a personal access token under \"Apps API\" in the Digital Ocean\nControl Panel and pass that to docker-machine create with the --digitalocean-access-token option.\n$ docker-machine create --driver digitalocean --digitalocean-access-token=aa9399a2175a93b17b1c86c807e08d3fc4b79876545432a629602f61cf6ccd6b test-this\n\nOptions:\n\n--digitalocean-access-token: Your personal access token for the Digital Ocean API.\n--digitalocean-image: The name of the Digital Ocean image to use. Default: docker\n--digitalocean-region: The region to create the droplet in, see Regions API for how to get a list. Default: nyc3\n--digitalocean-size: The size of the Digital Ocean droplet (sizes larger than the default are of the form 2gb). Default: 512mb\n\nThe DigitalOcean driver will use ubuntu-14-04-x64 as the default image.\nGoogle Compute Engine\nCreate machines on Google Compute Engine. You will need a Google account and project name. See https://cloud.google.com/compute/docs/projects for details on projects.\nThe Google driver uses OAuth. When creating the machine, you will have your browser opened to authorize. 
Once authorized, paste the code given in the prompt to launch the instance.\nOptions:\n\n--google-zone: The zone to launch the instance in. Default: us-central1-a\n--google-machine-type: The type of instance. Default: f1-micro\n--google-username: The username to use for the instance. Default: docker-user\n--google-instance-name: The name of the instance. Default: docker-machine\n--google-project: The name of your project to use when launching the instance.\n\nThe GCE driver will use the ubuntu-1404-trusty-v20141212 image unless otherwise specified.\nIBM SoftLayer\nCreate machines on SoftLayer.\nYou need to generate an API key in the SoftLayer control panel.\nRetrieve your API key\nOptions:\n - --softlayer-api-endpoint: Change the SoftLayer API endpoint\n - --softlayer-user: required username for your SoftLayer account, the API key needs to match this user.\n - --softlayer-api-key: required API key for your user account\n - --softlayer-cpu: Number of CPUs for the machine.\n - --softlayer-disk-size: Size of the disk in MB. 0 sets the SoftLayer default.\n - --softlayer-domain: required domain name for the machine\n - --softlayer-hostname: hostname for the machine\n - --softlayer-hourly-billing: Sets the hourly billing flag (default), otherwise uses monthly billing\n - --softlayer-image: OS image to use\n - --softlayer-local-disk: Use local machine disk instead of SoftLayer SAN.\n - --softlayer-memory: Memory for the host in MB\n - --softlayer-private-net-only: Disable public networking\n - --softlayer-region: SoftLayer region\nThe SoftLayer driver will use UBUNTU_LATEST as the image type by default.\nMicrosoft Azure\nCreate machines on Microsoft Azure.\nYou need to create a subscription with a cert. 
Run these commands and answer the questions:\n$ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem\n$ openssl pkcs12 -export -out mycert.pfx -in mycert.pem -name \"My Certificate\"\n$ openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer\n\nGo to the Azure portal, go to the \"Settings\" page (you can find the link at the bottom of the\nleft sidebar - you need to scroll), then \"Management Certificates\" and upload mycert.cer.\nGrab your subscription ID from the portal, then run docker-machine create with these details:\n$ docker-machine create -d azure --azure-subscription-id=\"SUB_ID\" --azure-subscription-cert=\"mycert.pem\" A-VERY-UNIQUE-NAME\n\nOptions:\n\n--azure-subscription-id: Your Azure subscription ID (a GUID like d255d8d7-5af0-4f5c-8a3e-1545044b861e).\n--azure-subscription-cert: Your Azure subscription cert.\n\nThe Azure driver uses the b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB\nimage by default. Note, this image is not available in the Chinese regions. In China you should\n specify b549f4301d0b4295b8e76ceb65df47d4__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB.\nYou may need to docker-machine ssh into the virtual machine and reboot to ensure that the OS is updated.\nMicrosoft Hyper-V\nCreates a Boot2Docker virtual machine locally on your Windows machine\nusing Hyper-V. See here\nfor instructions to enable Hyper-V. You will need to use an\nAdministrator-level account to create and manage Hyper-V machines.\n\nNote: You will need an existing virtual switch to use the\ndriver. Hyper-V can share an external network interface (aka\nbridging), see this blog.\nIf you would like to use NAT, create an internal network, and use\nInternet Connection\nSharing.\n\nOptions:\n\n--hyper-v-boot2docker-location: Location of a local boot2docker iso to use. Overrides the URL option below.\n--hyper-v-boot2docker-url: The URL of the boot2docker iso. 
Defaults to the latest available version.\n--hyper-v-disk-size: Size of disk for the host in MB. Defaults to 20000.\n--hyper-v-memory: Size of memory for the host in MB. Defaults to 1024. The machine is set up to use dynamic memory.\n--hyper-v-virtual-switch: Name of the virtual switch to use. Defaults to first found.\n\nOpenStack\nCreate machines on OpenStack.\nMandatory:\n\n--openstack-flavor-id: The flavor ID to use when creating the machine.\n--openstack-image-id: The image ID to use when creating the machine.\n\nOptions:\n\n--openstack-auth-url: Keystone service base URL.\n--openstack-username: User identifier to authenticate with.\n--openstack-password: User password. It can be omitted if the standard environment variable OS_PASSWORD is set.\n--openstack-tenant-name or --openstack-tenant-id: Identify the tenant in which the machine will be created.\n--openstack-region: The region to work on. Can be omitted if there is only one region in the OpenStack deployment.\n--openstack-endpoint-type: Endpoint type can be internalURL, adminURL or publicURL. It is a helper for the driver\n to choose the right URL in the OpenStack service catalog. If not provided, the default is publicURL.\n--openstack-net-id: The private network ID the machine will be connected to. If your OpenStack project\n contains only one private network it will be used automatically.\n--openstack-sec-groups: If security groups are available on your OpenStack you can specify a comma-separated list\n to use for the machine (e.g. secgrp001,secgrp002).\n--openstack-floatingip-pool: The IP pool that will be used to get a public IP and assign it to the machine. If there is an\n IP address already allocated but not assigned to any machine, this IP will be chosen and assigned to the machine. If\n there is no IP address already allocated, a new IP will be allocated and assigned to the machine.\n--openstack-ssh-user: The username to use for SSH into the machine. 
If not provided, root will be used.\n--openstack-ssh-port: Customize the SSH port if the SSH server on the machine does not listen on the default port.\n\nEnvironment variables:\nHere is the list of supported variables with their corresponding options. If both an environment variable\nand a CLI option are provided, the CLI option takes precedence.\n\n\n\nEnvironment variable\nCLI option\n\n\n\n\nOS_AUTH_URL\n--openstack-auth-url\n\n\nOS_USERNAME\n--openstack-username\n\n\nOS_PASSWORD\n--openstack-password\n\n\nOS_TENANT_NAME\n--openstack-tenant-name\n\n\nOS_TENANT_ID\n--openstack-tenant-id\n\n\nOS_REGION_NAME\n--openstack-region\n\n\nOS_ENDPOINT_TYPE\n--openstack-endpoint-type\n\n\n\nRackspace\nCreate machines on Rackspace cloud.\nOptions:\n\n--rackspace-username: Rackspace account username\n--rackspace-api-key: Rackspace API key\n--rackspace-region: Rackspace region name\n--rackspace-endpoint-type: Rackspace endpoint type (adminURL, internalURL or the default publicURL)\n--rackspace-image-id: Rackspace image ID. Default: Ubuntu 14.10 (Utopic Unicorn) (PVHVM)\n--rackspace-flavor-id: Rackspace flavor ID. Default: General Purpose 1GB\n--rackspace-ssh-user: SSH user for the newly booted machine. Set to root by default\n--rackspace-ssh-port: SSH port for the newly booted machine. Set to 22 by default\n\nEnvironment variables:\nHere is the list of supported variables with their corresponding options. 
If both an environment\nvariable and a CLI option are provided, the CLI option takes precedence.\n\n\n\nEnvironment variable\nCLI option\n\n\n\n\nOS_USERNAME\n--rackspace-username\n\n\nOS_API_KEY\n--rackspace-api-key\n\n\nOS_REGION_NAME\n--rackspace-region\n\n\nOS_ENDPOINT_TYPE\n--rackspace-endpoint-type\n\n\n\nThe Rackspace driver will use 598a4282-f14b-4e50-af4c-b3e52749d9f9 (Ubuntu 14.04 LTS) by default.\nOracle VirtualBox\nCreate machines locally using VirtualBox.\nThis driver requires VirtualBox to be installed on your host.\n$ docker-machine create --driver=virtualbox vbox-test\n\nOptions:\n\n--virtualbox-boot2docker-url: The URL of the boot2docker image. Defaults to the latest available version.\n--virtualbox-disk-size: Size of disk for the host in MB. Default: 20000\n--virtualbox-memory: Size of memory for the host in MB. Default: 1024\n\nThe VirtualBox driver uses the latest boot2docker image.\nVMware Fusion\nCreates machines locally on VMware Fusion. Requires VMware Fusion to be installed.\nOptions:\n\n--vmwarefusion-boot2docker-url: URL for boot2docker image.\n--vmwarefusion-disk-size: Size of disk for host VM (in MB). Default: 20000\n--vmwarefusion-memory-size: Size of memory for host VM (in MB). Default: 1024\n\nThe VMware Fusion driver uses the latest boot2docker image.\nVMware vCloud Air\nCreates machines on the vCloud Air subscription service. You need an account within an existing subscription of vCloud Air VPC or Dedicated Cloud.\nOptions:\n\n--vmwarevcloudair-username: vCloud Air Username.\n--vmwarevcloudair-password: vCloud Air Password.\n--vmwarevcloudair-catalog: Catalog. Default: Public Catalog\n--vmwarevcloudair-catalogitem: Catalog Item. Default: Ubuntu Server 12.04 LTS (amd64 20140927)\n--vmwarevcloudair-computeid: Compute ID (if using Dedicated Cloud).\n--vmwarevcloudair-cpu-count: VM CPU Count. Default: 1\n--vmwarevcloudair-docker-port: Docker port. Default: 2376\n--vmwarevcloudair-edgegateway: Organization Edge Gateway. 
Default: vdcid\n--vmwarevcloudair-memory-size: VM Memory Size in MB. Default: 2048\n--vmwarevcloudair-name: vApp Name. Default: autogenerated\n--vmwarevcloudair-orgvdcnetwork: Organization VDC Network to attach. Default: vdcid-default-routed\n--vmwarevcloudair-provision: Install Docker binaries. Default: true\n--vmwarevcloudair-publicip: Org Public IP to use.\n--vmwarevcloudair-ssh-port: SSH port. Default: 22\n--vmwarevcloudair-vdcid: Virtual Data Center ID.\n\nThe VMware vCloud Air driver will use the Ubuntu Server 12.04 LTS (amd64 20140927) image by default.\nVMware vSphere\nCreates machines on a VMware vSphere Virtual Infrastructure. Requires a working vSphere (ESXi and optionally vCenter) installation. The vSphere driver depends on govc (must be in path) and has been tested with vmware/govmomi@c848630.\nOptions:\n\n--vmwarevsphere-username: vSphere Username.\n--vmwarevsphere-password: vSphere Password.\n--vmwarevsphere-boot2docker-url: URL for boot2docker image.\n--vmwarevsphere-compute-ip: Compute host IP where the Docker VM will be instantiated.\n--vmwarevsphere-cpu-count: CPU number for Docker VM. Default: 2\n--vmwarevsphere-datacenter: Datacenter for Docker VM (must be set to ha-datacenter when connecting to a single host).\n--vmwarevsphere-datastore: Datastore for Docker VM.\n--vmwarevsphere-disk-size: Size of disk for Docker VM (in MB). Default: 20000\n--vmwarevsphere-memory-size: Size of memory for Docker VM (in MB). Default: 2048\n--vmwarevsphere-network: Network where the Docker VM will be attached.\n--vmwarevsphere-pool: Resource pool for Docker VM.\n--vmwarevsphere-vcenter: IP/hostname for vCenter (or ESXi if connecting directly to a single host).\n\nThe VMware vSphere driver uses the latest boot2docker image.",
"title": "Docker Machine"
},
{
"loc": "/machine#docker-machine",
"tags": "",
"text": "Note : Machine is currently in beta, so things are likely to change. We\ndon't recommend you use it in production yet. Machine makes it really easy to create Docker hosts on your computer, on cloud\nproviders and inside your own data center. It creates servers, installs Docker\non them, then configures the Docker client to talk to them. Once your Docker host has been created, Machine provides a number of commands for\nmanaging it: Starting, stopping, restarting Upgrading Docker Configuring the Docker client to talk to your host",
"title": "Docker Machine"
},
{
"loc": "/machine#installation",
"tags": "",
"text": "Docker Machine is supported on Windows, OSX, and Linux. To install Docker\nMachine, download the appropriate binary for your OS and architecture to the\ncorrect place in your PATH : Windows - x86_64 OSX - x86_64 Linux - x86_64 Windows - i386 OSX - i386 Linux - i386 Now you should be able to check the version with docker-machine -v : $ docker-machine -v\nmachine version 0.1.0",
"title": "Installation"
},
{
"loc": "/machine#getting-started-with-docker-machine-using-a-local-vm",
"tags": "",
"text": "Let's take a look at using docker-machine to create, use, and manage a Docker\nhost inside of VirtualBox . First, ensure that VirtualBox 4.3.20 is correctly\ninstalled on your system. If you run the docker-machine ls command to show all available machines, you will see\nthat none have been created so far. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL To create one, we run the docker-machine create command, passing the string virtualbox to the --driver flag. The final argument we pass is the name of\nthe machine - in this case, we will name our machine \"dev\". This will download a lightweight Linux distribution\n( boot2docker ) with the Docker\ndaemon installed, and will create and start a VirtualBox VM with Docker running. $ docker-machine create --driver virtualbox dev\nINFO[0000] Creating SSH key...\nINFO[0000] Creating VirtualBox VM...\nINFO[0007] Starting VirtualBox VM...\nINFO[0007] Waiting for VM to start...\nINFO[0038] dev has been created and is now the active machine\nINFO[0038] To connect: docker $(docker-machine config dev) ps To use the Docker CLI, you can use the env command to list the commands\nneeded to connect to the instance. $ docker-machine env dev\nexport DOCKER_TLS_VERIFY=yes\nexport DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client\nexport DOCKER_HOST=tcp://192.168.99.100:2376 You can see the machine you have created by running the docker-machine ls command\nagain: $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.100:2376 The * next to dev indicates that it is the active host. Next, as noted in the output of the docker-machine create command, we have to tell\nDocker to talk to that machine. You can do this with the docker-machine config \ncommand. For example, $ docker $(docker-machine config dev) ps This will pass arguments to the Docker client that specify the TLS settings.\nTo see what will be passed, run docker-machine config dev . 
You can now run Docker commands on this host: $ docker $(docker-machine config dev) run busybox echo hello world\nUnable to find image 'busybox' locally\nPulling repository busybox\ne72ac664f4f0: Download complete\n511136ea3c5a: Download complete\ndf7546f9f060: Download complete\ne433a6c5b276: Download complete\nhello world Any exposed ports are available on the Docker host\u2019s IP address, which you can\nget using the docker-machine ip command: $ docker-machine ip\n192.168.99.100 Now you can manage as many local VMs running Docker as you please - just run docker-machine create again. If you are finished using a host, you can stop it with docker-machine stop and start\nit again with docker-machine start : $ docker-machine stop\n$ docker-machine start If they aren't passed any arguments, commands such as docker-machine stop will run\nagainst the active host (in this case, the VirtualBox VM). You can also specify\na host to run a command against as an argument. For instance, you could also\nhave written: $ docker-machine stop dev\n$ docker-machine start dev",
"title": "Getting started with Docker Machine using a local VM"
},
{
"loc": "/machine#using-docker-machine-with-a-cloud-provider",
"tags": "",
"text": "One of the nice things about docker-machine is that it provides several \u201cdrivers\u201d\nwhich let you use the same interface to create hosts on many different cloud\nplatforms. This is accomplished by using the docker-machine create command with the\n --driver flag. Here we will be demonstrating the Digital Ocean driver (called digitalocean ), but\nthere are drivers included for several providers including Amazon Web Services,\nGoogle Compute Engine, and Microsoft Azure. Usually it is required that you pass account verification credentials for these\nproviders as flags to docker-machine create . These flags are unique for each driver.\nFor instance, to pass a Digital Ocean access token you use the --digitalocean-access-token flag. Let's take a look at how to do this. To generate your access token: Go to the Digital Ocean administrator panel and click on \"Apps and API\" in\nthe side panel. Click on \"Generate New Token\". Give the token a clever name (e.g. \"machine\"), make sure the \"Write\" checkbox\nis checked, and click on \"Generate Token\". Grab the big long hex string that is generated (this is your token) and store it somewhere safe. Now, run docker-machine create with the digitalocean driver and pass your key to\nthe --digitalocean-access-token flag. Example: $ docker-machine create \\\n --driver digitalocean \\\n --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 \\\n staging\nINFO[0000] Creating SSH key...\nINFO[0000] Creating Digital Ocean droplet...\nINFO[0002] Waiting for SSH...\nINFO[0085] staging has been created and is now the active machine\nINFO[0085] To connect: docker $(docker-machine config dev) staging For convenience, docker-machine will use sensible defaults for choosing settings such\n as the image that the VPS is based on, but they can also be overridden using\ntheir respective flags (e.g. --digitalocean-image ). 
This is useful if, for\ninstance, you want to create a nice large instance with a lot of memory and CPUs\n(by default docker-machine creates a small VPS). For a full list of the\nflags/settings available and their defaults, see the output of docker-machine create -h . When the creation of a host is initiated, a unique SSH key for accessing the\nhost (initially for provisioning, then directly later if the user runs the docker-machine ssh command) will be created automatically and stored in the client's\ndirectory in ~/.docker/machines . After the creation of the SSH key, Docker\nwill be installed on the remote machine and the daemon will be configured to\naccept remote connections over TCP using TLS for authentication. Once this\nis finished, the host is ready for connection. From this point on, the remote host behaves much like the local host we\ncreated in the last section. If we look at docker-machine , we\u2019ll see it is now the\nactive host: $ docker-machine active\nstaging\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Running tcp://192.168.99.103:2376\nstaging * digitalocean Running tcp://104.236.50.118:2376 To select an active host, you can use the docker-machine active command: $ docker-machine active dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.103:2376\nstaging digitalocean Running tcp://104.236.50.118:2376 To remove a host and all of its containers and images, use docker-machine rm : $ docker-machine rm dev staging\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL",
"title": "Using Docker Machine with a cloud provider"
},
{
"loc": "/machine#adding-a-host-without-a-driver",
"tags": "",
"text": "You can add a host to Docker which only has a URL and no driver. Therefore it\ncan be used an alias for an existing host so you don\u2019t have to type out the URL\nevery time you run a Docker command. $ docker-machine create --url=tcp://50.134.234.20:2376 custombox\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ncustombox * none Running tcp://50.134.234.20:2376",
"title": "Adding a host without a driver"
},
{
"loc": "/machine#using-docker-machine-with-docker-swarm",
"tags": "",
"text": "Docker Machine can also provision Swarm \nclusters. This can be used with any driver and will be secured with TLS. Note : This is an experimental feature so the subcommands and\noptions are likely to change in future versions. First, create a Swarm token. Optionally, you can use another discovery service.\nSee the Swarm docs for details. To create the token, first create a Machine. This example will use VirtualBox. $ docker-machine create -d virtualbox local Load the Machine configuration into your shell: $ $(docker-machine env local) Then run generate the token using the Swarm Docker image: $ docker run swarm create\n1257e0f0bbb499b5cd04b4c9bdb2dab3 Once you have the token, you can create the cluster. Swarm Master Create the Swarm master: docker-machine create \\\n -d virtualbox \\\n --swarm \\\n --swarm-master \\\n --swarm-discovery token:// TOKEN-FROM-ABOVE \\\n swarm-master Replace TOKEN-FROM-ABOVE with your random token.\nThis will create the Swarm master and add itself as a Swarm node. Swarm Nodes Now, create more Swarm nodes: docker-machine create \\\n -d virtualbox \\\n --swarm \\\n --swarm-discovery token:// TOKEN-FROM-ABOVE \\\n swarm-node-00 You now have a Swarm cluster across two nodes.\nTo connect to the Swarm master, use docker-machine env --swarm swarm-master For example: $ docker-machine env --swarm swarm-master\nexport DOCKER_TLS_VERIFY=yes\nexport DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client\nexport DOCKER_HOST=tcp://192.168.99.100:3376 You can load this into your environment using $(docker-machine env --swarm swarm-master) . Now you can use the Docker CLI to query: $ docker info\nContainers: 1\nNodes: 1\n swarm-master: 192.168.99.100:2376\n \u2514 Containers: 2\n \u2514 Reserved CPUs: 0 / 4\n \u2514 Reserved Memory: 0 B / 999.9 MiB",
"title": "Using Docker Machine with Docker Swarm"
},
{
"loc": "/machine#subcommands",
"tags": "",
"text": "active Get or set the active machine. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Running tcp://192.168.99.103:2376\nstaging * digitalocean Running tcp://104.236.50.118:2376\n$ docker-machine active dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.103:2376\nstaging digitalocean Running tcp://104.236.50.118:2376 create Create a machine. $ docker-machine create --driver virtualbox dev\nINFO[0000] Creating SSH key...\nINFO[0000] Creating VirtualBox VM...\nINFO[0007] Starting VirtualBox VM...\nINFO[0007] Waiting for VM to start...\nINFO[0038] dev has been created and is now the active machine. To point Docker at this machine, run: export DOCKER_HOST=$(docker-machine url) DOCKER_AUTH=identity config Show the Docker client configuration for a machine. $ docker-machine config dev\n--tls --tlscacert=/Users/ehazlett/.docker/machines/dev/ca.pem --tlscert=/Users/ehazlett/.docker/machines/dev/cert.pem --tlskey=/Users/ehazlett/.docker/machines/dev/key.pem -H tcp://192.168.99.103:2376 env Set environment variables to dictate that docker should run a command against\na particular machine. docker-machine env machinename will print out export commands which can be\nrun in a subshell. Running docker-machine env -u will print unset commands which reverse this effect. $ env | grep DOCKER\n$ $(docker-machine env dev)\n$ env | grep DOCKER\nDOCKER_HOST=tcp://192.168.99.101:2376\nDOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client\nDOCKER_TLS_VERIFY=yes\n$ # If you run a docker command, now it will run against that host.\n$ $(docker-machine env -u)\n$ env | grep DOCKER\n$ # The environment variables have been unset. inspect Inspect information about a machine. 
$ docker-machine inspect dev\n{\n \"DriverName\": \"virtualbox\",\n \"Driver\": {\n \"MachineName\": \"docker-host-128be8d287b2028316c0ad5714b90bcfc11f998056f2f790f7c1f43f3d1e6eda\",\n \"SSHPort\": 55834,\n \"Memory\": 1024,\n \"DiskSize\": 20000,\n \"Boot2DockerURL\": \"\"\n }\n} help Show help text. ip Get the IP address of a machine. $ docker-machine ip\n192.168.99.104 kill Kill (abruptly force stop) a machine. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.104:2376\n$ docker-machine kill dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Stopped ls List machines. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev virtualbox Stopped\nfoo0 virtualbox Running tcp://192.168.99.105:2376\nfoo1 virtualbox Running tcp://192.168.99.106:2376\nfoo2 virtualbox Running tcp://192.168.99.107:2376\nfoo3 virtualbox Running tcp://192.168.99.108:2376\nfoo4 * virtualbox Running tcp://192.168.99.109:2376 restart Restart a machine. Oftentimes this is equivalent to docker-machine stop; docker-machine start . $ docker-machine restart\nINFO[0005] Waiting for VM to start... rm Remove a machine. This will remove the local reference as well as delete it\non the cloud provider or virtualization management platform. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\nfoo0 virtualbox Running tcp://192.168.99.105:2376\nfoo1 virtualbox Running tcp://192.168.99.106:2376\n$ docker-machine rm foo1\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\nfoo0 virtualbox Running tcp://192.168.99.105:2376 ssh Log into or run a command on a machine using SSH. 
To log in, just run docker-machine ssh machinename : $ docker-machine ssh dev\n ## .\n ## ## ## ==\n ## ## ## ## ===\n / \\___/ ===\n ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~\n \\______ o __/\n \\ \\ __/\n \\____\\______/\n _ _ ____ _ _\n| |__ ___ ___ | |_|___ \\ __| | ___ ___| | _____ _ __\n| '_ \\ / _ \\ / _ \\| __| __) / _` |/ _ \\ / __| |/ / _ \\ '__|\n| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| __/ |\n|_.__/ \\___/ \\___/ \\__|_____\\__,_|\\___/ \\___|_|\\_\\___|_|\nBoot2Docker version 1.4.0, build master : 69cf398 - Fri Dec 12 01:39:42 UTC 2014\ndocker@boot2docker:~$ ls /\nUsers/ dev/ home/ lib/ mnt/ proc/ run/ sys/ usr/\nbin/ etc/ init linuxrc opt/ root/ sbin/ tmp var/ You can also specify commands to run remotely by appending them directly to the docker-machine ssh command, much like the regular ssh program works: $ docker-machine ssh dev free\n total used free shared buffers\nMem: 1023556 183136 840420 0 30920\n-/+ buffers: 152216 871340\nSwap: 1212036 0 1212036 If the command you are appending has flags, e.g. df -h , you can use the flag\nparsing terminator ( -- ) to avoid confusing the docker-machine client, which\nwill otherwise interpret them as flags you intended to pass to it: $ docker-machine ssh dev -- df -h\nFilesystem Size Used Available Use% Mounted on\nrootfs 899.6M 85.9M 813.7M 10% /\ntmpfs 899.6M 85.9M 813.7M 10% /\ntmpfs 499.8M 0 499.8M 0% /dev/shm\n/dev/sda1 18.2G 58.2M 17.2G 0% /mnt/sda1\ncgroup 499.8M 0 499.8M 0% /sys/fs/cgroup\n/dev/sda1 18.2G 58.2M 17.2G 0%\n/mnt/sda1/var/lib/docker/aufs start Gracefully start a machine. $ docker-machine start\nINFO[0005] Waiting for VM to start... stop Gracefully stop a machine. $ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Running tcp://192.168.99.104:2376\n$ docker-machine stop dev\n$ docker-machine ls\nNAME ACTIVE DRIVER STATE URL\ndev * virtualbox Stopped upgrade Upgrade a machine to the latest version of Docker. 
$ docker-machine upgrade dev url Get the URL of a host $ docker-machine url\ntcp://192.168.99.109:2376",
"title": "Subcommands"
},
{
"loc": "/machine#drivers",
"tags": "",
"text": "TODO: List all possible values (where applicable) for all flags for every\ndriver. Amazon Web Services Create machines on Amazon Web Services . You will need an Access Key ID, Secret Access Key and a VPC ID. To find the VPC ID, login to the AWS console and go to Services - VPC - Your VPCs. Select the one where you would like to launch the instance. Options: --amazonec2-access-key : required Your access key id for the Amazon Web Services API. --amazonec2-ami : The AMI ID of the instance to use Default: ami-4ae27e22 --amazonec2-instance-type : The instance type to run. Default: t2.micro --amazonec2-region : The region to use when launching the instance. Default: us-east-1 --amazonec2-root-size : The root disk size of the instance (in GB). Default: 16 --amazonec2-secret-key : required Your secret access key for the Amazon Web Services API. --amazonec2-security-group : AWS VPC security group name. Default: docker-machine --amazonec2-session-token : Your session token for the Amazon Web Services API. --amazonec2-subnet-id : AWS VPC subnet id --amazonec2-vpc-id : required Your VPC ID to launch the instance in. --amazonec2-zone : The AWS zone launch the instance in (i.e. one of a,b,c,d,e). Default: a By default, the Amazon EC2 driver will use a daily image of Ubuntu 14.04 LTS. Region AMI ID ap-northeast-1 ami-44f1e245 ap-southeast-1 ami-f95875ab ap-southeast-2 ami-890b62b3 cn-north-1 ami-fe7ae8c7 eu-west-1 ami-823686f5 eu-central-1 ami-ac1524b1 sa-east-1 ami-c770c1da us-east-1 ami-4ae27e22 us-west-1 ami-d1180894 us-west-2 ami-898dd9b9 us-gov-west-1 ami-cf5630ec Digital Ocean Create Docker machines on Digital Ocean . You need to create a personal access token under \"Apps API\" in the Digital Ocean\nControl Panel and pass that to docker-machine create with the --digitalocean-access-token option. 
$ docker-machine create --driver digitalocean --digitalocean-access-token=aa9399a2175a93b17b1c86c807e08d3fc4b79876545432a629602f61cf6ccd6b test-this Options: --digitalocean-access-token : Your personal access token for the Digital Ocean API. --digitalocean-image : The name of the Digital Ocean image to use. Default: docker --digitalocean-region : The region to create the droplet in, see Regions API for how to get a list. Default: nyc3 --digitalocean-size : The size of the Digital Ocean droplet (larger than default options are of the form 2gb ). Default: 512mb The DigitalOcean driver will use ubuntu-14-04-x64 as the default image. Google Compute Engine Create machines on Google Compute Engine . You will need a Google account and project name. See https://cloud.google.com/compute/docs/projects for details on projects. The Google driver uses OAuth. When creating the machine, your browser will be opened so you can authorize. Once authorized, paste the code given in the prompt to launch the instance. Options: --google-zone : The zone to launch the instance in. Default: us-central1-a --google-machine-type : The type of instance. Default: f1-micro --google-username : The username to use for the instance. Default: docker-user --google-instance-name : The name of the instance. Default: docker-machine --google-project : The name of your project to use when launching the instance. The GCE driver will use the ubuntu-1404-trusty-v20141212 image unless otherwise specified. IBM SoftLayer Create machines on SoftLayer . You need to generate an API key in the SoftLayer control panel. Retrieve your API key Options:\n - --softlayer-api-endpoint= : Change the SoftLayer API endpoint\n - --softlayer-user : required Username for your SoftLayer account; the API key needs to match this user.\n - --softlayer-api-key : required API key for your user account\n - --softlayer-cpu : Number of CPUs for the machine.\n - --softlayer-disk-size: Size of the disk in MB. 
0 sets the SoftLayer default.\n - --softlayer-domain : required Domain name for the machine\n - --softlayer-hostname : Hostname for the machine\n - --softlayer-hourly-billing : Sets the hourly billing flag (default), otherwise uses monthly billing\n - --softlayer-image : OS image to use\n - --softlayer-local-disk : Use local machine disk instead of SoftLayer SAN.\n - --softlayer-memory : Memory for host in MB\n - --softlayer-private-net-only : Disable public networking\n - --softlayer-region : SoftLayer region The SoftLayer driver will use UBUNTU_LATEST as the image type by default. Microsoft Azure Create machines on Microsoft Azure . You need to create a subscription with a cert. Run these commands and answer the questions: $ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem\n$ openssl pkcs12 -export -out mycert.pfx -in mycert.pem -name \"My Certificate\"\n$ openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer Go to the Azure portal, go to the \"Settings\" page (you can find the link at the bottom of the\nleft sidebar - you need to scroll), then \"Management Certificates\" and upload mycert.cer . Grab your subscription ID from the portal, then run docker-machine create with these details: $ docker-machine create -d azure --azure-subscription-id=\"SUB_ID\" --azure-subscription-cert=\"mycert.pem\" A-VERY-UNIQUE-NAME Options: --azure-subscription-id : Your Azure subscription ID (a GUID like d255d8d7-5af0-4f5c-8a3e-1545044b861e ). --azure-subscription-cert : Your Azure subscription cert. The Azure driver uses the b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB \nimage by default. Note, this image is not available in the Chinese regions. In China you should\n specify b549f4301d0b4295b8e76ceb65df47d4__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB . You may need to docker-machine ssh into the virtual machine and reboot to ensure that the OS is updated. 
Microsoft Hyper-V Creates a Boot2Docker virtual machine locally on your Windows machine\nusing Hyper-V. See here \nfor instructions to enable Hyper-V. You will need to use an\nAdministrator level account to create and manage Hyper-V machines. Note : You will need an existing virtual switch to use the\ndriver. Hyper-V can share an external network interface (aka\nbridging), see this blog .\nIf you would like to use NAT, create an internal network, and use Internet Connection\nSharing . Options: --hyper-v-boot2docker-location : Location of a local boot2docker iso to use. Overrides the URL option below. --hyper-v-boot2docker-url : The URL of the boot2docker iso. Defaults to the latest available version. --hyper-v-disk-size : Size of disk for the host in MB. Defaults to 20000 . --hyper-v-memory : Size of memory for the host in MB. Defaults to 1024 . The machine is set up to use dynamic memory. --hyper-v-virtual-switch : Name of the virtual switch to use. Defaults to first found. OpenStack Create machines on OpenStack . Mandatory: --openstack-flavor-id : The flavor ID to use when creating the machine. --openstack-image-id : The image ID to use when creating the machine. Options: --openstack-auth-url : Keystone service base URL. --openstack-username : User identifier to authenticate with. --openstack-password : User password. It can be omitted if the standard environment variable OS_PASSWORD is set. --openstack-tenant-name or --openstack-tenant-id : Identify the tenant in which the machine will be created. --openstack-region : The region to work on. Can be omitted if there is only one region on the OpenStack. --openstack-endpoint-type : Endpoint type can be internalURL , adminURL , or publicURL . It is a helper for the driver\n to choose the right URL in the OpenStack service catalog. If not provided, the default is publicURL . --openstack-net-id : The ID of the private network the machine will be connected to. 
If your OpenStack project\n contains only one private network, it will be used automatically. --openstack-sec-groups : If security groups are available on your OpenStack you can specify a comma-separated list\n to use for the machine (e.g. secgrp001,secgrp002 ). --openstack-floatingip-pool : The IP pool that will be used to get a public IP and assign it to the machine. If there is an\n IP address already allocated but not assigned to any machine, this IP will be chosen and assigned to the machine. If\n there is no IP address already allocated, a new IP will be allocated and assigned to the machine. --openstack-ssh-user : The username to use when SSHing into the machine. If not provided, root will be used. --openstack-ssh-port : Customize the SSH port if the SSH server on the machine does not listen on the default port. Environment variables: Here is the list of supported variables with their corresponding options. If both the environment variable\nand the CLI option are provided, the CLI option takes precedence. Environment variable CLI option OS_AUTH_URL --openstack-auth-url OS_USERNAME --openstack-username OS_PASSWORD --openstack-password OS_TENANT_NAME --openstack-tenant-name OS_TENANT_ID --openstack-tenant-id OS_REGION_NAME --openstack-region OS_ENDPOINT_TYPE --openstack-endpoint-type Rackspace Create machines on the Rackspace cloud Options: --rackspace-username : Rackspace account username --rackspace-api-key : Rackspace API key --rackspace-region : Rackspace region name --rackspace-endpoint-type : Rackspace endpoint type (adminURL, internalURL or the default publicURL) --rackspace-image-id : Rackspace image ID. Default: Ubuntu 14.10 (Utopic Unicorn) (PVHVM) --rackspace-flavor-id : Rackspace flavor ID. Default: General Purpose 1GB --rackspace-ssh-user : SSH user for the newly booted machine. Set to root by default --rackspace-ssh-port : SSH port for the newly booted machine. 
Set to 22 by default Environment variables: Here is the list of supported variables with their corresponding options. If both the environment\nvariable and the CLI option are provided, the CLI option takes precedence. Environment variable CLI option OS_USERNAME --rackspace-username OS_API_KEY --rackspace-api-key OS_REGION_NAME --rackspace-region OS_ENDPOINT_TYPE --rackspace-endpoint-type The Rackspace driver will use 598a4282-f14b-4e50-af4c-b3e52749d9f9 (Ubuntu 14.04 LTS) by default. Oracle VirtualBox Create machines locally using VirtualBox .\nThis driver requires VirtualBox to be installed on your host. $ docker-machine create --driver=virtualbox vbox-test Options: --virtualbox-boot2docker-url : The URL of the boot2docker image. Defaults to the latest available version. --virtualbox-disk-size : Size of disk for the host in MB. Default: 20000 --virtualbox-memory : Size of memory for the host in MB. Default: 1024 The VirtualBox driver uses the latest boot2docker image. VMware Fusion Creates machines locally on VMware Fusion . Requires VMware Fusion to be installed. Options: --vmwarefusion-boot2docker-url : URL for boot2docker image. --vmwarefusion-disk-size : Size of disk for host VM (in MB). Default: 20000 --vmwarefusion-memory-size : Size of memory for host VM (in MB). Default: 1024 The VMware Fusion driver uses the latest boot2docker image. VMware vCloud Air Creates machines on the vCloud Air subscription service. You need an account within an existing subscription of vCloud Air VPC or Dedicated Cloud. Options: --vmwarevcloudair-username : vCloud Air Username. --vmwarevcloudair-password : vCloud Air Password. --vmwarevcloudair-catalog : Catalog. Default: Public Catalog --vmwarevcloudair-catalogitem : Catalog Item. Default: Ubuntu Server 12.04 LTS (amd64 20140927) --vmwarevcloudair-computeid : Compute ID (if using Dedicated Cloud). --vmwarevcloudair-cpu-count : VM CPU Count. Default: 1 --vmwarevcloudair-docker-port : Docker port. 
Default: 2376 --vmwarevcloudair-edgegateway : Organization Edge Gateway. Default: vdcid --vmwarevcloudair-memory-size : VM Memory Size in MB. Default: 2048 --vmwarevcloudair-name : vApp Name. Default: autogenerated --vmwarevcloudair-orgvdcnetwork : Organization VDC Network to attach. Default: vdcid -default-routed --vmwarevcloudair-provision : Install Docker binaries. Default: true --vmwarevcloudair-publicip : Org Public IP to use. --vmwarevcloudair-ssh-port : SSH port. Default: 22 --vmwarevcloudair-vdcid : Virtual Data Center ID. The VMware vCloud Air driver will use the Ubuntu Server 12.04 LTS (amd64 20140927) image by default. VMware vSphere Creates machines on a VMware vSphere Virtual Infrastructure. Requires a working vSphere (ESXi and optionally vCenter) installation. The vSphere driver depends on govc (must be in the path) and has been tested with vmware/govmomi@c848630 . Options: --vmwarevsphere-username : vSphere Username. --vmwarevsphere-password : vSphere Password. --vmwarevsphere-boot2docker-url : URL for boot2docker image. --vmwarevsphere-compute-ip : Compute host IP where the Docker VM will be instantiated. --vmwarevsphere-cpu-count : CPU number for Docker VM. Default: 2 --vmwarevsphere-datacenter : Datacenter for Docker VM (must be set to ha-datacenter when connecting to a single host). --vmwarevsphere-datastore : Datastore for Docker VM. --vmwarevsphere-disk-size : Size of disk for Docker VM (in MB). Default: 20000 --vmwarevsphere-memory-size : Size of memory for Docker VM (in MB). Default: 2048 --vmwarevsphere-network : Network where the Docker VM will be attached. --vmwarevsphere-pool : Resource pool for Docker VM. --vmwarevsphere-vcenter : IP/hostname for vCenter (or ESXi if connecting directly to a single host). The VMware vSphere driver uses the latest boot2docker image.",
"title": "Drivers"
},
{
"loc": "/swarm/",
"tags": "",
"text": "Docker Swarm\nDocker Swarm is native clustering for Docker. It turns a pool of Docker hosts\ninto a single, virtual host.\nSwarm serves the standard Docker API, so any tool which already communicates\nwith a Docker daemon can use Swarm to transparently scale to multiple hosts:\nDokku, Compose, Krane, Flynn, Deis, DockerUI, Shipyard, Drone, Jenkins... and,\nof course, the Docker client itself.\nLike other Docker projects, Swarm follows the \"batteries included but removable\"\nprinciple. It ships with a simple scheduling backend out of the box, and as\ninitial development settles, an API will develop to enable pluggable backends.\nThe goal is to provide a smooth out-of-box experience for simple use cases, and\nallow swapping in more powerful backends, like Mesos, for large scale production\ndeployments.\nInstallation\n\nNote: The only requirement for Swarm nodes is they all run the same release\nDocker daemon (version 1.4.0 and later), configured to listen to a tcp\nport that the Swarm manager can access.\n\nThe easiest way to get started with Swarm is to use the\nofficial Docker image.\ndocker pull swarm\n\n\nNodes setup\nEach swarm node will run a swarm node agent which will register the referenced\nDocker daemon, and will then monitor it, updating the discovery backend to its\nstatus.\nThe following example uses the Docker Hub based token discovery service:\n# create a cluster\n$ docker run --rm swarm create\n6856663cdefdec325839a4b7e1de38e8 # - this is your unique cluster_id\n\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 
192.168.0.X),\n# as long as the swarm manager can access it.\n$ docker run -d swarm join --addr=node_ip:2375 token://cluster_id\n\n# start the manager on any machine or your laptop\n$ docker run -d -p swarm_port:2375 swarm manage token://cluster_id\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ docker run --rm swarm list token://cluster_id\nnode_ip:2375\n\n\n\nNote: In order for the Swarm manager to be able to communicate with the node agent on\neach node, they must listen to a common network interface. This can be achieved\nby starting the daemon with the -H flag (e.g. -H tcp://0.0.0.0:2375).\n\nTLS\nSwarm supports TLS authentication between the CLI and Swarm but also between\nSwarm and the Docker nodes. However, all the Docker daemon certificates and client\ncertificates must be signed using the same CA-certificate.\nIn order to enable TLS for both client and server, the same command line options\nas Docker can be specified:\nswarm manage --tlsverify --tlscacert=CACERT --tlscert=CERT --tlskey=KEY [...]\nPlease refer to the Docker documentation\nfor more information on how to set up TLS authentication on Docker and generating\nthe certificates.\n\nNote: Swarm certificates must be generated with extendedKeyUsage = clientAuth,serverAuth.\n\nDiscovery services\nSee the Discovery service document for more information.\nAdvanced Scheduling\nSee filters and strategies to learn\nmore about advanced scheduling.",
"title": "Docker Swarm"
},
{
"loc": "/swarm#docker-swarm",
"tags": "",
"text": "Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts\ninto a single, virtual host. Swarm serves the standard Docker API, so any tool which already communicates\nwith a Docker daemon can use Swarm to transparently scale to multiple hosts:\nDokku, Compose, Krane, Flynn, Deis, DockerUI, Shipyard, Drone, Jenkins... and,\nof course, the Docker client itself. Like other Docker projects, Swarm follows the \"batteries included but removable\"\nprinciple. It ships with a simple scheduling backend out of the box, and as\ninitial development settles, an API will develop to enable pluggable backends.\nThe goal is to provide a smooth out-of-box experience for simple use cases, and\nallow swapping in more powerful backends, like Mesos, for large scale production\ndeployments.",
"title": "Docker Swarm"
},
{
"loc": "/swarm#installation",
"tags": "",
"text": "Note : The only requirement for Swarm nodes is they all run the same release\nDocker daemon (version 1.4.0 and later), configured to listen to a tcp \nport that the Swarm manager can access. The easiest way to get started with Swarm is to use the official Docker image . docker pull swarm",
"title": "Installation"
},
{
"loc": "/swarm#nodes-setup",
"tags": "",
"text": "Each swarm node will run a swarm node agent which will register the referenced\nDocker daemon, and will then monitor it, updating the discovery backend to its\nstatus. The following example uses the Docker Hub based token discovery service: # create a cluster\n$ docker run --rm swarm create\n6856663cdefdec325839a4b7e1de38e8 # - this is your unique cluster_id \n\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ docker run -d swarm join --addr= node_ip:2375 token:// cluster_id \n\n# start the manager on any machine or your laptop\n$ docker run -d -p swarm_port :2375 swarm manage token:// cluster_id \n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ docker run --rm swarm list token:// cluster_id node_ip:2375 Note : In order for the Swarm manager to be able to communicate with the node agent on\neach node, they must listen to a common network interface. This can be achieved\nby starting with the -H flag (e.g. -H tcp://0.0.0.0:2375 ).",
"title": "Nodes setup"
},
{
"loc": "/swarm#tls",
"tags": "",
"text": "Swarm supports TLS authentication between the CLI and Swarm but also between\nSwarm and the Docker nodes. However , all the Docker daemon certificates and client\ncertificates must be signed using the same CA-certificate. In order to enable TLS for both client and server, the same command line options\nas Docker can be specified: swarm manage --tlsverify --tlscacert= CACERT --tlscert= CERT --tlskey= KEY [...] Please refer to the Docker documentation \nfor more information on how to set up TLS authentication on Docker and generating\nthe certificates. Note : Swarm certificates must be generated with extendedKeyUsage = clientAuth,serverAuth .",
"title": "TLS"
},
{
"loc": "/swarm#discovery-services",
"tags": "",
"text": "See the Discovery service document for more information.",
"title": "Discovery services"
},
{
"loc": "/swarm#advanced-scheduling",
"tags": "",
"text": "See filters and strategies to learn\nmore about advanced scheduling.",
"title": "Advanced Scheduling"
},
{
"loc": "/docker-hub/",
"tags": "",
"text": "Docker Hub\n\nAccounts\nLearn how to create a Docker Hub\naccount and manage your organizations and groups.\nRepositories\nFind out how to share your Docker images in Docker Hub\nrepositories and how to store and manage private images.\nAutomated Builds\nLearn how to automate your build and deploy pipeline with Automated\nBuilds",
"title": "Docker Hub"
},
{
"loc": "/docker-hub#docker-hub",
"tags": "",
"text": "",
"title": "Docker Hub"
},
{
"loc": "/docker-hub#accounts",
"tags": "",
"text": "Learn how to create a Docker Hub \naccount and manage your organizations and groups.",
"title": "Accounts"
},
{
"loc": "/docker-hub#repositories",
"tags": "",
"text": "Find out how to share your Docker images in Docker Hub\nrepositories and how to store and manage private images.",
"title": "Repositories"
},
{
"loc": "/docker-hub#automated-builds",
"tags": "",
"text": "Learn how to automate your build and deploy pipeline with Automated\nBuilds",
"title": "Automated Builds"
},
{
"loc": "/docker-hub/accounts/",
"tags": "",
"text": "Accounts on Docker Hub\nDocker Hub Accounts\nYou can search for Docker images and pull them from Docker\nHub without signing in or even having an\naccount. However, in order to push images, leave comments or to star\na repository, you are going to need a Docker\nHub account.\nRegistration for a Docker Hub Account\nYou can get a Docker Hub account by\nsigning up for one here. A valid\nemail address is required to register, which you will need to verify for\naccount activation.\nEmail activation process\nYou need to have at least one verified email address to be able to use your\nDocker Hub account. If you can't find the validation email,\nyou can request another by visiting the Resend Email Confirmation page.\nPassword reset process\nIf you can't access your account for some reason, you can reset your password\nfrom the Password Reset\npage.\nOrganizations Groups\nAlso available on the Docker Hub are organizations and groups that allow\nyou to collaborate across your organization or team. You can see what\norganizations you belong to and add new organizations from the Account Settings\ntab. They are also listed below your user name on your repositories page and in your account profile.\n\nFrom within your organizations you can create groups that allow you to\nfurther manage who can interact with your repositories.\n\nYou can add or invite users to join groups by clicking on the organization and then clicking the edit button for the group to which you want to add members. Enter a user-name (for current Hub users) or email address (if they are not yet Hub users) for the person you want to invite. They will receive an email invitation to join the group.",
"title": "Accounts"
},
{
"loc": "/docker-hub/accounts#accounts-on-docker-hub",
"tags": "",
"text": "",
"title": "Accounts on Docker Hub"
},
{
"loc": "/docker-hub/accounts#docker-hub-accounts",
"tags": "",
"text": "You can search for Docker images and pull them from Docker\nHub without signing in or even having an\naccount. However, in order to push images, leave comments or to star \na repository, you are going to need a Docker\nHub account. Registration for a Docker Hub Account You can get a Docker Hub account by signing up for one here . A valid\nemail address is required to register, which you will need to verify for\naccount activation. Email activation process You need to have at least one verified email address to be able to use your Docker Hub account. If you can't find the validation email,\nyou can request another by visiting the Resend Email Confirmation page. Password reset process If you can't access your account for some reason, you can reset your password\nfrom the Password Reset \npage.",
"title": "Docker Hub Accounts"
},
{
"loc": "/docker-hub/accounts#organizations-groups",
"tags": "",
"text": "Also available on the Docker Hub are organizations and groups that allow\nyou to collaborate across your organization or team. You can see what\norganizations you belong to and add new organizations from the Account Settings\ntab. They are also listed below your username on your repositories page and in your account profile. From within your organizations you can create groups that allow you to\nfurther manage who can interact with your repositories. You can add or invite users to join groups by clicking on the organization and then clicking the edit button for the group to which you want to add members. Enter a username (for current Hub users) or email address (if they are not yet Hub users) for the person you want to invite. They will receive an email invitation to join the group.",
"title": "Organizations & Groups"
},
{
"loc": "/docker-hub/repos/",
"tags": "",
"text": "Repositories and Images on Docker Hub\n\nSearching for repositories and images\nYou can search for all the publicly available repositories and images using\nDocker.\n$ sudo docker search ubuntu\n\nThis will show you a list of the currently available repositories on the\nDocker Hub which match the provided keyword.\nIf a repository is private it won't be listed on the repository search\nresults. To see repository statuses, you can look at your profile\npage on Docker Hub.\nRepositories\nYour Docker Hub repositories have a number of useful features.\nStars\nYour repositories can be starred and you can star repositories in\nreturn. Stars are a way to show that you like a repository. They are\nalso an easy way of bookmarking your favorites.\nComments\nYou can interact with other members of the Docker community and maintainers by\nleaving comments on repositories. If you find any comments that are not\nappropriate, you can flag them for review.\nCollaborators and their role\nA collaborator is someone you want to give access to a private\nrepository. Once designated, they can push and pull to your\nrepositories. They will not be allowed to perform any administrative\ntasks such as deleting the repository or changing its status from\nprivate to public.\n\nNote:\nA collaborator cannot add other collaborators. Only the owner of\nthe repository has administrative access.\n\nYou can also collaborate on Docker Hub with organizations and groups.\nYou can read more about that here.\nOfficial Repositories\nThe Docker Hub contains a number of official\nrepositories. These are\ncertified repositories from vendors and contributors to Docker. 
They\ncontain Docker images from vendors like Canonical, Oracle, and Red Hat\nthat you can use to build applications and services.\nIf you use Official Repositories you know you're using a supported,\noptimized and up-to-date image to power your applications.\n\nNote:\nIf you would like to contribute an official repository for your\norganization, product or team you can see more information\nhere.\n\nPrivate Repositories\nPrivate repositories allow you to have repositories that contain images\nthat you want to keep private, either to your own account or within an\norganization or group.\nTo work with a private repository on Docker\nHub, you will need to add one via the Add\nRepository\nlink. You get one private repository for free with your Docker Hub\naccount. If you need more private repositories, you can upgrade your Docker\nHub plan.\nOnce the private repository is created, you can push and pull images\nto and from it using Docker.\n\nNote: You need to be signed in and have access to work with a\nprivate repository.\n\nPrivate repositories are just like public ones. However, it isn't\npossible to browse them or search their content on the public registry.\nThey do not get cached the same way as a public repository either.\nIt is possible to give access to a private repository to those whom you\ndesignate (i.e., collaborators) from its Settings page. From there, you\ncan also switch repository status (public to private, or\nvice versa). You will need to have an available private repository slot\nopen before you can do such a switch. If you don't have any available,\nyou can always upgrade your Docker\nHub plan.\nWebhooks\nYou can configure webhooks for your repositories on the Repository\nSettings page. A webhook is called only after a successful push is\nmade. 
The webhook calls are HTTP POST requests with a JSON payload\nsimilar to the example shown below.\nExample webhook JSON payload:\n{\n \"callback_url\": \"https://registry.hub.docker.com/u/svendowideit/busybox/hook/2141bc0cdec4hebec411i4c1g40242eg110020/\",\n \"push_data\": {\n \"images\": [\n \"27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3\",\n \"51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c\",\n ...\n ],\n \"pushed_at\": 1417566822,\n \"pusher\": \"svendowideit\"\n },\n \"repository\": {\n \"comment_count\": 0,\n \"date_created\": 1417566665,\n \"description\": \"\",\n \"full_description\": \"webhook triggered from a 'docker push'\",\n \"is_official\": false,\n \"is_private\": false,\n \"is_trusted\": false,\n \"name\": \"busybox\",\n \"namespace\": \"svendowideit\",\n \"owner\": \"svendowideit\",\n \"repo_name\": \"svendowideit/busybox\",\n \"repo_url\": \"https://registry.hub.docker.com/u/svendowideit/busybox/\",\n \"star_count\": 0,\n \"status\": \"Active\"\n }\n}\n\n\nWebhooks allow you to notify people, services and other applications of\nnew updates to your images and repositories. To get started adding webhooks,\ngo to the desired repository in the Hub, and click \"Webhooks\" under the \"Settings\"\nbox.\n\nNote: For testing, you can try an HTTP request tool like\nrequestb.in.\nNote: The Docker Hub servers are currently in the IP range\n162.242.195.64 - 162.242.195.127, so you can restrict your webhooks to\naccept webhook requests from that set of IP addresses.\n\nWebhook chains\nWebhook chains allow you to chain calls to multiple services. For example,\nyou can use this to trigger a deployment of your container only after\nit has been successfully tested, then update a separate Changelog once the\ndeployment is complete.\nAfter clicking the \"Add webhook\" button, simply add as many URLs as necessary\nin your chain.\nThe first webhook in a chain will be called after a successful push. 
Subsequent\nURLs will be contacted after the callback has been validated.\nValidating a callback\nIn order to validate a callback in a webhook chain, you need to\n\nRetrieve the callback_url value in the request's JSON payload.\nSend a POST request to this URL containing a valid JSON body.\n\n\nNote: A chain request will only be considered complete once the last\ncallback has been validated.\n\nTo help you debug or simply view the results of your webhook(s),\nview the \"History\" of the webhook available on its settings page.\nCallback JSON data\nThe following parameters are recognized in callback data:\n\nstate (required): Accepted values are success, failure and error.\n If the state isn't success, the webhook chain will be interrupted.\ndescription: A string containing miscellaneous information that will be\n available on the Docker Hub. Maximum 255 characters.\ncontext: A string containing the context of the operation. Can be retrieved\n from the Docker Hub. Maximum 100 characters.\ntarget_url: The URL where the results of the operation can be found. Can be\n retrieved on the Docker Hub.\n\nExample callback payload:\n{\n \"state\": \"success\",\n \"description\": \"387 tests PASSED\",\n \"context\": \"Continuous integration by Acme CI\",\n \"target_url\": \"http://ci.acme.com/results/afd339c1c3d27\"\n}",
"title": "Repositories"
},
{
"loc": "/docker-hub/repos#repositories-and-images-on-docker-hub",
"tags": "",
"text": "",
"title": "Repositories and Images on Docker Hub"
},
{
"loc": "/docker-hub/repos#searching-for-repositories-and-images",
"tags": "",
"text": "You can search for all the publicly available repositories and images using\nDocker. $ sudo docker search ubuntu This will show you a list of the currently available repositories on the\nDocker Hub which match the provided keyword. If a repository is private it won't be listed on the repository search\nresults. To see repository statuses, you can look at your profile\npage on Docker Hub .",
"title": "Searching for repositories and images"
},
{
"loc": "/docker-hub/repos#repositories",
"tags": "",
"text": "Your Docker Hub repositories have a number of useful features. Stars Your repositories can be starred and you can star repositories in\nreturn. Stars are a way to show that you like a repository. They are\nalso an easy way of bookmarking your favorites. Comments You can interact with other members of the Docker community and maintainers by\nleaving comments on repositories. If you find any comments that are not\nappropriate, you can flag them for review. Collaborators and their role A collaborator is someone you want to give access to a private\nrepository. Once designated, they can push and pull to your\nrepositories. They will not be allowed to perform any administrative\ntasks such as deleting the repository or changing its status from\nprivate to public. Note: \nA collaborator cannot add other collaborators. Only the owner of\nthe repository has administrative access. You can also collaborate on Docker Hub with organizations and groups.\nYou can read more about that here .",
"title": "Repositories"
},
{
"loc": "/docker-hub/repos#official-repositories",
"tags": "",
"text": "The Docker Hub contains a number of official\nrepositories . These are\ncertified repositories from vendors and contributors to Docker. They\ncontain Docker images from vendors like Canonical, Oracle, and Red Hat\nthat you can use to build applications and services. If you use Official Repositories you know you're using a supported,\noptimized and up-to-date image to power your applications. Note: \nIf you would like to contribute an official repository for your\norganization, product or team you can see more information here .",
"title": "Official Repositories"
},
{
"loc": "/docker-hub/repos#private-repositories",
"tags": "",
"text": "Private repositories allow you to have repositories that contain images\nthat you want to keep private, either to your own account or within an\norganization or group. To work with a private repository on Docker\nHub , you will need to add one via the Add\nRepository \nlink. You get one private repository for free with your Docker Hub\naccount. If you need more private repositories, you can upgrade your Docker\nHub plan. Once the private repository is created, you can push and pull images\nto and from it using Docker. Note: You need to be signed in and have access to work with a\nprivate repository. Private repositories are just like public ones. However, it isn't\npossible to browse them or search their content on the public registry.\nThey do not get cached the same way as a public repository either. It is possible to give access to a private repository to those whom you\ndesignate (i.e., collaborators) from its Settings page. From there, you\ncan also switch repository status (public to private, or\nvice versa). You will need to have an available private repository slot\nopen before you can do such a switch. If you don't have any available,\nyou can always upgrade your Docker\nHub plan.",
"title": "Private Repositories"
},
{
"loc": "/docker-hub/repos#webhooks",
"tags": "",
"text": "You can configure webhooks for your repositories on the Repository\nSettings page. A webhook is called only after a successful push is\nmade. The webhook calls are HTTP POST requests with a JSON payload\nsimilar to the example shown below. Example webhook JSON payload: {\n \"callback_url\": \"https://registry.hub.docker.com/u/svendowideit/busybox/hook/2141bc0cdec4hebec411i4c1g40242eg110020/\",\n \"push_data\": {\n \"images\": [\n \"27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3\",\n \"51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c\",\n ...\n ],\n \"pushed_at\": 1417566822,\n \"pusher\": \"svendowideit\"\n },\n \"repository\": {\n \"comment_count\": 0,\n \"date_created\": 1417566665,\n \"description\": \"\",\n \"full_description\": \"webhook triggered from a 'docker push'\",\n \"is_official\": false,\n \"is_private\": false,\n \"is_trusted\": false,\n \"name\": \"busybox\",\n \"namespace\": \"svendowideit\",\n \"owner\": \"svendowideit\",\n \"repo_name\": \"svendowideit/busybox\",\n \"repo_url\": \"https://registry.hub.docker.com/u/svendowideit/busybox/\",\n \"star_count\": 0,\n \"status\": \"Active\"\n }\n} Webhooks allow you to notify people, services and other applications of\nnew updates to your images and repositories. To get started adding webhooks,\ngo to the desired repository in the Hub, and click \"Webhooks\" under the \"Settings\"\nbox. Note: For testing, you can try an HTTP request tool like requestb.in. Note: The Docker Hub servers are currently in the IP range 162.242.195.64 - 162.242.195.127, so you can restrict your webhooks to\naccept webhook requests from that set of IP addresses. Webhook chains Webhook chains allow you to chain calls to multiple services. For example,\nyou can use this to trigger a deployment of your container only after\nit has been successfully tested, then update a separate Changelog once the\ndeployment is complete.\nAfter clicking the \"Add webhook\" button, simply add as many URLs as necessary\nin your chain. 
The first webhook in a chain will be called after a successful push. Subsequent\nURLs will be contacted after the callback has been validated. Validating a callback In order to validate a callback in a webhook chain, you need to Retrieve the callback_url value in the request's JSON payload. Send a POST request to this URL containing a valid JSON body. Note : A chain request will only be considered complete once the last\ncallback has been validated. To help you debug or simply view the results of your webhook(s),\nview the \"History\" of the webhook available on its settings page. Callback JSON data The following parameters are recognized in callback data: state (required): Accepted values are success , failure and error .\n If the state isn't success , the webhook chain will be interrupted. description : A string containing miscellaneous information that will be\n available on the Docker Hub. Maximum 255 characters. context : A string containing the context of the operation. Can be retrieved\n from the Docker Hub. Maximum 100 characters. target_url : The URL where the results of the operation can be found. Can be\n retrieved on the Docker Hub. Example callback payload: {\n \"state\": \"success\",\n \"description\": \"387 tests PASSED\",\n \"context\": \"Continuous integration by Acme CI\",\n \"target_url\": \"http://ci.acme.com/results/afd339c1c3d27\"\n}",
"title": "Webhooks"
},
{
"loc": "/docker-hub/builds/",
"tags": "",
"text": "Automated Builds on Docker Hub\nAbout Automated Builds\nAutomated Builds are a special feature of Docker Hub that allows you to\nuse Docker Hub's build clusters to automatically\ncreate images from a specified Dockerfile and a GitHub or Bitbucket repository\n(or \"context\"). The system will clone your repository and build the image\ndescribed by the Dockerfile using the repository as the context. The\nresulting automated image will then be uploaded to the Docker Hub registry\nand marked as an Automated Build.\nAutomated Builds have several advantages:\n\n\nUsers of your Automated Build can trust that the resulting\nimage was built exactly as specified.\n\n\nThe Dockerfile will be available to anyone with access to\nyour repository on the Docker Hub registry.\n\n\nBecause the process is automated, Automated Builds help to\nmake sure that your repository is always up to date.\n\n\nAutomated Builds are supported for both public and private repositories\non both GitHub and Bitbucket.\nTo use Automated Builds, you must have an account on Docker Hub\nand on GitHub and/or Bitbucket. In either case, the account needs\nto be properly validated and activated before you can link to it.\nSetting up Automated Builds with GitHub\nIn order to set up an Automated Build, you need to first link your\nDocker Hub account with a GitHub account.\nThis will allow the registry to see your repositories.\n\nNote: \nAutomated Builds currently require read and write access since\nDocker Hub needs to set up a GitHub service\nhook. We have no choice here, this is how GitHub manages permissions, sorry! \nWe do guarantee nothing else will be touched in your account.\n\nTo get started, log into your Docker Hub account and click the\n\"+ Add Repository\" button at the upper right of the screen. Then select\nAutomated Build.\nSelect the GitHub service.\nThen follow the onscreen instructions to authorize and link your\nGitHub account to Docker Hub. 
Once it is linked, you'll be able to\nchoose a repo from which to create the Automated Build.\nCreating an Automated Build\nYou can create an Automated Build from any of your\npublic or private GitHub repositories with a Dockerfile.\nGitHub Submodules\nIf your GitHub repository contains links to private submodules, you'll\nneed to add a deploy key from your Docker Hub repository.\nYour Docker Hub deploy key is located under the \"Build Details\"\nmenu on the Automated Build's main page in the Hub. Add this key\nto your GitHub submodule by visiting the Settings page for the\nrepository on GitHub and selecting \"Deploy keys\".\n\n1. Your automated build's deploy key is in the \"Build Details\" menu\nunder \"Deploy keys\".\n2. In your GitHub submodule's repository Settings page, add the\ndeploy key from your Docker Hub Automated Build.\n\nGitHub Organizations\nGitHub organizations will appear once your membership to that organization is\nmade public on GitHub. To verify, you can look at the members tab for your\norganization on GitHub.\nGitHub Service Hooks\nFollow the steps below to configure the GitHub service\nhooks for your Automated Build:\n\n1. Log in to Github.com, and go to your Repository page. Click on \"Settings\" on\nthe right side of the page. 
You must have admin privileges to the repository in order to do this.\n2. Click on \"Webhooks & Services\" on the left side of the page.\n3. Find the service labeled \"Docker\" and click on it.\n4. Make sure the \"Active\" checkbox is selected and click the \"Update service\" button to save your changes.\n\nSetting up Automated Builds with Bitbucket\nIn order to set up an Automated Build, you need to first link your\nDocker Hub account with a Bitbucket account.\nThis will allow the registry to see your repositories.\nTo get started, log into your Docker Hub account and click the\n\"+ Add Repository\" button at the upper right of the screen. Then\nselect Automated Build.\nSelect the Bitbucket source.\nThen follow the onscreen instructions to authorize and link your\nBitbucket account to Docker Hub. Once it is linked, you'll be able\nto choose a repository from which to create the Automated Build.\nCreating an Automated Build\nYou can create an Automated Build from any of your\npublic or private Bitbucket repositories with a Dockerfile.\nAdding a Hook\nWhen you link your Docker Hub account, a POST hook should get automatically\nadded to your Bitbucket repository. Follow the steps below to confirm or modify the\nBitbucket hooks for your Automated Build:\n\n1. Log in to Bitbucket.org and go to your Repository page. Click on \"Settings\" on\nthe far left side of the page, under \"Navigation\". 
You must have admin privileges\nto the repository in order to do this.\n2. Click on \"Hooks\" on the near left side of the page, under \"Settings\".\n3. You should now see a list of hooks associated with the repo, including a POST hook that points at\nregistry.hub.docker.com/hooks/bitbucket.\n\nThe Dockerfile and Automated Builds\nDuring the build process, Docker will copy the contents of your Dockerfile.\nIt will also add it to the Docker Hub for the Docker\ncommunity (for public repositories) or approved team members/orgs (for private\nrepositories) to see on the repository page.\nREADME.md\nIf you have a README.md file in your repository, it will be used as the\nrepository's full description. The build process will look for a\nREADME.md in the same directory as your Dockerfile.\n\nWarning:\nIf you change the full description after a build, it will be\nrewritten the next time the Automated Build runs. To make changes,\nmodify the README.md in the Git repository.\n\nRemote Build triggers\nIf you need a way to trigger Automated Builds outside of GitHub or Bitbucket,\nyou can set up a build trigger. When you turn on the build trigger for an\nAutomated Build, it will give you a URL to which you can send POST requests.\nThis will trigger the Automated Build, much as with a GitHub webhook.\nBuild triggers are available under the Settings menu of each Automated Build\nrepository on the Docker Hub.\n\nYou can use curl to trigger a build:\n$ curl --data build=true -X POST https://registry.hub.docker.com/u/svendowideit/testhook/trigger/be579c82-7c0e-11e4-81c4-0242ac110020/\nOK\n\n\n\nNote: \nYou can only trigger one build at a time and no more than one\nevery five minutes. 
If you already have a build pending, or if you\nrecently submitted a build request, those requests will be ignored.\nTo verify everything is working correctly, check the logs of the last\nten triggers on the settings page.\n\nWebhooks\nAutomated Builds also include a Webhooks feature. Webhooks can be called\nafter a successful repository push is made. This includes when a new tag is added\nto an existing image.\nThe webhook call will generate an HTTP POST with the following JSON\npayload:\n{\n \"callback_url\": \"https://registry.hub.docker.com/u/svendowideit/testhook/hook/2141b5bi5i5b02bec211i4eeih0242eg11000a/\",\n \"push_data\": {\n \"images\": [\n \"27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3\",\n \"51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c\",\n ...\n ],\n \"pushed_at\": 1417566161,\n \"pusher\": \"trustedbuilder\"\n },\n \"repository\": {\n \"comment_count\": 0,\n \"date_created\": 1417494799,\n \"description\": \"\",\n \"dockerfile\": \"#\\n# BUILD\\u0009\\u0009docker build -t svendowideit/apt-cacher .\\n# RUN\\u0009\\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\\n#\\n# and then you can run containers with:\\n# \\u0009\\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\\n#\\nFROM\\u0009\\u0009ubuntu\\nMAINTAINER\\u0009SvenDowideit@home.org.au\\n\\n\\nVOLUME\\u0009\\u0009[\\\"/var/cache/apt-cacher-ng\\\"]\\nRUN\\u0009\\u0009apt-get update ; apt-get install -yq apt-cacher-ng\\n\\nEXPOSE \\u0009\\u00093142\\nCMD\\u0009\\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\\n\",\n \"full_description\": \"Docker Hub based automated build from a GitHub repo\",\n \"is_official\": false,\n \"is_private\": true,\n \"is_trusted\": true,\n \"name\": \"testhook\",\n \"namespace\": \"svendowideit\",\n \"owner\": \"svendowideit\",\n \"repo_name\": \"svendowideit/testhook\",\n \"repo_url\": \"https://registry.hub.docker.com/u/svendowideit/testhook/\",\n \"star_count\": 0,\n \"status\": \"Active\"\n }\n}\n\n\nWebhooks are available under the Settings menu of each 
Repository.\n\nNote: If you want to test your webhook out we recommend using\na tool like requestb.in.\nNote: The Docker Hub servers are currently in the IP range\n162.242.195.64 - 162.242.195.127, so you can restrict your webhooks to\naccept webhook requests from that set of IP addresses.\n\nWebhook chains\nWebhook chains allow you to chain calls to multiple services. For example,\nyou can use this to trigger a deployment of your container only after\nit has been successfully tested, then update a separate Changelog once the\ndeployment is complete.\nAfter clicking the \"Add webhook\" button, simply add as many URLs as necessary\nin your chain.\nThe first webhook in a chain will be called after a successful push. Subsequent\nURLs will be contacted after the callback has been validated.\nValidating a callback\nIn order to validate a callback in a webhook chain, you need to\n\nRetrieve the callback_url value in the request's JSON payload.\nSend a POST request to this URL containing a valid JSON body.\n\n\nNote: A chain request will only be considered complete once the last\ncallback has been validated.\n\nTo help you debug or simply view the results of your webhook(s),\nview the \"History\" of the webhook available on its settings page.\nCallback JSON data\nThe following parameters are recognized in callback data:\n\nstate (required): Accepted values are success, failure and error.\n If the state isn't success, the webhook chain will be interrupted.\ndescription: A string containing miscellaneous information that will be\n available on the Docker Hub. Maximum 255 characters.\ncontext: A string containing the context of the operation. Can be retrieved\n from the Docker Hub. Maximum 100 characters.\ntarget_url: The URL where the results of the operation can be found. 
Can be\n retrieved on the Docker Hub.\n\nExample callback payload:\n{\n \"state\": \"success\",\n \"description\": \"387 tests PASSED\",\n \"context\": \"Continuous integration by Acme CI\",\n \"target_url\": \"http://ci.acme.com/results/afd339c1c3d27\"\n}\n\nRepository links\nRepository links are a way to associate one Automated Build with\nanother. If one gets updated, the linking system triggers a rebuild\nof the other Automated Build. This makes it easy to keep all your\nAutomated Builds up to date.\nTo add a link, go to the repository for the Automated Build you want to\nlink to and click on Repository Links under the Settings menu at\nright. Then, enter the name of the repository that you want to link.\n\nWarning:\nYou can add more than one repository link; however, you should\ndo so very carefully. Creating a two-way relationship between Automated Builds will\ncause an endless build loop.",
"title": "Automated Builds"
},
{
"loc": "/docker-hub/builds#automated-builds-on-docker-hub",
"tags": "",
"text": "",
"title": "Automated Builds on Docker Hub"
},
{
"loc": "/docker-hub/builds#about-automated-builds",
"tags": "",
"text": "Automated Builds are a special feature of Docker Hub which allow you to\nuse Docker Hub's build clusters to automatically\ncreate images from a specified Dockerfile and a GitHub or Bitbucket repository\n(or \"context\"). The system will clone your repository and build the image\ndescribed by the Dockerfile using the repository as the context. The\nresulting automated image will then be uploaded to the Docker Hub registry\nand marked as an Automated Build . Automated Builds have several advantages: Users of your Automated Build can trust that the resulting\nimage was built exactly as specified. The Dockerfile will be available to anyone with access to\nyour repository on the Docker Hub registry. Because the process is automated, Automated Builds help to\nmake sure that your repository is always up to date. Automated Builds are supported for both public and private repositories\non both GitHub and Bitbucket . To use Automated Builds, you must have an account on Docker Hub \nand on GitHub and/or Bitbucket. In either case, the account needs\nto be properly validated and activated before you can link to it.",
"title": "About Automated Builds"
},
{
"loc": "/docker-hub/builds#setting-up-automated-builds-with-github",
"tags": "",
"text": "In order to set up an Automated Build, you need to first link your Docker Hub account with a GitHub account.\nThis will allow the registry to see your repositories. Note: \nAutomated Builds currently require read and write access since Docker Hub needs to set up a GitHub service\nhook. We have no choice here, this is how GitHub manages permissions, sorry! \nWe do guarantee nothing else will be touched in your account. To get started, log into your Docker Hub account and click the\n\"+ Add Repository\" button at the upper right of the screen. Then select Automated Build . Select the GitHub service . Then follow the onscreen instructions to authorize and link your\nGitHub account to Docker Hub. Once it is linked, you'll be able to\nchoose a repo from which to create the Automated Build. Creating an Automated Build You can create an Automated Build from any of your\npublic or private GitHub repositories with a Dockerfile . GitHub Submodules If your GitHub repository contains links to private submodules, you'll\nneed to add a deploy key from your Docker Hub repository. Your Docker Hub deploy key is located under the \"Build Details\"\nmenu on the Automated Build's main page in the Hub. Add this key\nto your GitHub submodule by visiting the Settings page for the\nrepository on GitHub and selecting \"Deploy keys\". 1. Your automated build's deploy key is in the \"Build Details\" menu\nunder \"Deploy keys\". 2. In your GitHub submodule's repository Settings page, add the\ndeploy key from your Docker Hub Automated Build. GitHub Organizations GitHub organizations will appear once your membership to that organization is\nmade public on GitHub. To verify, you can look at the members tab for your\norganization on GitHub. 
GitHub Service Hooks Follow the steps below to configure the GitHub service\nhooks for your Automated Build: 1. Log in to Github.com, and go to your Repository page. Click on \"Settings\" on\nthe right side of the page. You must have admin privileges to the repository in order to do this. 2. Click on \"Webhooks & Services\" on the left side of the page. 3. Find the service labeled \"Docker\" and click on it. 4. Make sure the \"Active\" checkbox is selected and click the \"Update service\" button to save your changes.",
"title": "Setting up Automated Builds with GitHub"
},
{
"loc": "/docker-hub/builds#setting-up-automated-builds-with-bitbucket",
"tags": "",
"text": "In order to set up an Automated Build, you need to first link your Docker Hub account with a Bitbucket account.\nThis will allow the registry to see your repositories. To get started, log into your Docker Hub account and click the\n\"+ Add Repository\" button at the upper right of the screen. Then\nselect Automated Build . Select the Bitbucket source . Then follow the onscreen instructions to authorize and link your\nBitbucket account to Docker Hub. Once it is linked, you'll be able\nto choose a repository from which to create the Automated Build. Creating an Automated Build You can create an Automated Build from any of your\npublic or private Bitbucket repositories with a Dockerfile . Adding a Hook When you link your Docker Hub account, a POST hook should get automatically\nadded to your Bitbucket repository. Follow the steps below to confirm or modify the\nBitbucket hooks for your Automated Build: 1. Log in to Bitbucket.org and go to your Repository page. Click on \"Settings\" on\nthe far left side of the page, under \"Navigation\". You must have admin privileges\nto the repository in order to do this. 2. Click on \"Hooks\" on the near left side of the page, under \"Settings\". 3. You should now see a list of hooks associated with the repo, including a POST hook that points at\nregistry.hub.docker.com/hooks/bitbucket.",
"title": "Setting up Automated Builds with Bitbucket"
},
{
"loc": "/docker-hub/builds#the-dockerfile-and-automated-builds",
"tags": "",
"text": "During the build process, Docker will copy the contents of your Dockerfile .\nIt will also add it to the Docker Hub for the Docker\ncommunity (for public repositories) or approved team members/orgs (for private\nrepositories) to see on the repository page. README.md If you have a README.md file in your repository, it will be used as the\nrepository's full description.The build process will look for a README.md in the same directory as your Dockerfile . Warning: \nIf you change the full description after a build, it will be\nrewritten the next time the Automated Build has been built. To make changes,\nmodify the README.md from the Git repository.",
"title": "The Dockerfile and Automated Builds"
},
{
"loc": "/docker-hub/builds#remote-build-triggers",
"tags": "",
"text": "If you need a way to trigger Automated Builds outside of GitHub or Bitbucket,\nyou can set up a build trigger. When you turn on the build trigger for an\nAutomated Build, it will give you a URL to which you can send POST requests.\nThis will trigger the Automated Build, much as with a GitHub webhook. Build triggers are available under the Settings menu of each Automated Build\nrepository on the Docker Hub. You can use curl to trigger a build: $ curl --data build=true -X POST https://registry.hub.docker.com/u/svendowideit/testhook/trigger/be579c\n82-7c0e-11e4-81c4-0242ac110020/\nOK Note: \nYou can only trigger one build at a time and no more than one\nevery five minutes. If you already have a build pending, or if you\nrecently submitted a build request, those requests will be ignored .\nTo verify everything is working correctly, check the logs of last\nten triggers on the settings page .",
"title": "Remote Build triggers"
},
{
"loc": "/docker-hub/builds#webhooks",
"tags": "",
"text": "Automated Builds also include a Webhooks feature. Webhooks can be called\nafter a successful repository push is made. This includes when a new tag is added\nto an existing image. The webhook call will generate a HTTP POST with the following JSON\npayload: {\n callback_url : https://registry.hub.docker.com/u/svendowideit/testhook/hook/2141b5bi5i5b02bec211i4eeih0242eg11000a/ ,\n push_data : {\n images : [\n 27d47432a69bca5f2700e4dff7de0388ed65f9d3fb1ec645e2bc24c223dc1cc3 ,\n 51a9c7c1f8bb2fa19bcd09789a34e63f35abb80044bc10196e304f6634cc582c ,\n ...\n ],\n pushed_at : 1.417566161e+09,\n pusher : trustedbuilder \n },\n repository : {\n comment_count : 0,\n date_created : 1.417494799e+09,\n description : ,\n dockerfile : #\\n# BUILD\\u0009\\u0009docker build -t svendowideit/apt-cacher .\\n# RUN\\u0009\\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\\n#\\n# and then you can run containers with:\\n# \\u0009\\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\\n#\\nFROM\\u0009\\u0009ubuntu\\nMAINTAINER\\u0009SvenDowideit@home.org.au\\n\\n\\nVOLUME\\u0009\\u0009[\\ /var/cache/apt-cacher-ng\\ ]\\nRUN\\u0009\\u0009apt-get update ; apt-get install -yq apt-cacher-ng\\n\\nEXPOSE \\u0009\\u00093142\\nCMD\\u0009\\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\\n ,\n full_description : Docker Hub based automated build from a GitHub repo ,\n is_official : false,\n is_private : true,\n is_trusted : true,\n name : testhook ,\n namespace : svendowideit ,\n owner : svendowideit ,\n repo_name : svendowideit/testhook ,\n repo_url : https://registry.hub.docker.com/u/svendowideit/testhook/ ,\n star_count : 0,\n status : Active \n }\n} Webhooks are available under the Settings menu of each Repository. Note: If you want to test your webhook out we recommend using\na tool like requestb.in . 
Note : The Docker Hub servers are currently in the IP range 162.242.195.64 - 162.242.195.127 , so you can restrict your webhooks to\naccept webhook requests from that set of IP addresses. Webhook chains Webhook chains allow you to chain calls to multiple services. For example,\nyou can use this to trigger a deployment of your container only after\nit has been successfully tested, then update a separate Changelog once the\ndeployment is complete.\nAfter clicking the \"Add webhook\" button, simply add as many URLs as necessary\nin your chain. The first webhook in a chain will be called after a successful push. Subsequent\nURLs will be contacted after the callback has been validated. Validating a callback In order to validate a callback in a webhook chain, you need to Retrieve the callback_url value in the request's JSON payload. Send a POST request to this URL containing a valid JSON body. Note : A chain request will only be considered complete once the last\ncallback has been validated. To help you debug or simply view the results of your webhook(s),\nview the \"History\" of the webhook available on its settings page. Callback JSON data The following parameters are recognized in callback data: state (required): Accepted values are success , failure and error .\n If the state isn't success , the webhook chain will be interrupted. description : A string containing miscellaneous information that will be\n available on the Docker Hub. Maximum 255 characters. context : A string containing the context of the operation. Can be retrieved\n from the Docker Hub. Maximum 100 characters. target_url : The URL where the results of the operation can be found. Can be\n retrieved on the Docker Hub. Example callback payload: {\n \"state\": \"success\",\n \"description\": \"387 tests PASSED\",\n \"context\": \"Continuous integration by Acme CI\",\n \"target_url\": \"http://ci.acme.com/results/afd339c1c3d27\"\n}",
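A callback body like the example payload above can be assembled and checked before it is POSTed back to the callback_url. A minimal Python sketch, assuming only the standard library; the field names and length limits come from the list above, while the function name and sample values are illustrative:

```python
import json

# Per the docs, "state" must be one of these; any other value is rejected
# here, and a non-success state interrupts the webhook chain on the Hub.
VALID_STATES = {"success", "failure", "error"}

def make_callback_body(state, description="", context="", target_url=""):
    """Build the JSON body to POST to the callback_url of a webhook chain."""
    if state not in VALID_STATES:
        raise ValueError("state must be one of: success, failure, error")
    return json.dumps({
        "state": state,
        "description": description[:255],  # Hub limit: 255 characters
        "context": context[:100],          # Hub limit: 100 characters
        "target_url": target_url,
    })

body = make_callback_body(
    "success",
    "387 tests PASSED",
    "Continuous integration by Acme CI",
    "http://ci.acme.com/results/afd339c1c3d27",
)
print(body)
```

In a real chain you would then send `body` in a POST request to the `callback_url` taken from the incoming payload; the chain is only complete once the last callback validates.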
"title": "Webhooks"
},
{
"loc": "/docker-hub/builds#repository-links",
"tags": "",
"text": "Repository links are a way to associate one Automated Build with\nanother. If one gets updated,the linking system triggers a rebuild\nfor the other Automated Build. This makes it easy to keep all your\nAutomated Builds up to date. To add a link, go to the repository for the Automated Build you want to\nlink to and click on Repository Links under the Settings menu at\nright. Then, enter the name of the repository that you want have linked. Warning: \nYou can add more than one repository link, however, you should\ndo so very carefully. Creating a two way relationship between Automated Builds will\ncause an endless build loop.",
"title": "Repository links"
},
{
"loc": "/docker-hub/official_repos/",
"tags": "",
"text": "Guidelines for Creating and Documenting Official Repositories\nIntroduction\nYou\u2019ve been given the job of creating an image for an Official Repository\nhosted on Docker Hub Registry. These are\nour guidelines for getting that task done. Even if you\u2019re not\nplanning to create an Official Repo, you can think of these guidelines as best\npractices for image creation generally.\nThis document consists of two major sections:\n\nA list of expected files, resources and supporting items for your image,\nalong with best practices for creating those items\nExamples embodying those practices\n\nExpected Files Resources\nA Git repository\nYour image needs to live in a Git repository, preferably on GitHub. (If you\u2019d\nlike to use a different provider, please contact us\ndirectly.) Docker strongly recommends that this repo be publicly\naccessible.\nIf the repo is private or has otherwise limited access, you must provide a\nmeans of at least \u201cread-only\u201d access for both general users and for the\ndocker-library maintainers, who need access for review and building purposes.\nA Dockerfile\nComplete information on Dockerfiles can be found in the Reference section.\nWe also have a page discussing best practices for writing Dockerfiles.\nYour Dockerfile should adhere to the following:\n\nIt must be written either by using FROM scratch or be based on another,\nestablished Official Image.\nIt must follow Dockerfile best practices. These are discussed on the\nbest practices page. 
In addition,\nDocker engineer Michael Crosby has some good tips for Dockerfiles in\nthis blog post.\n\nWhile ONBUILD triggers\nare not required, if you choose to use them you should:\n\nBuild both ONBUILD and non-ONBUILD images, with the ONBUILD image\nbuilt FROM the non-ONBUILD image.\nThe ONBUILD image should be specifically tagged, for example, ruby:\nlatestand ruby:onbuild, or ruby:2 and ruby:2-onbuild\n\nA short description\nInclude a brief description of your image (in plaintext). Only one description\nis required; you don\u2019t need additional descriptions for each tag. The file\nshould also: \n\nBe named README-short.txt\nReside in the repo for the \u201clatest\u201d tag\nNot exceed 100 characters\n\nA logo\nInclude a logo of your company or the product (png format preferred). Only one\nlogo is required; you don\u2019t need additional logo files for each tag. The logo\nfile should have the following characteristics: \n\nBe named logo.png\nShould reside in the repo for the \u201clatest\u201d tag\nShould fit inside a 200px square, maximized in one dimension (preferably the\nwidth)\nSquare or wide (landscape) is preferred over tall (portrait), but exceptions\ncan be made based on the logo needed\n\nA long description\nInclude a comprehensive description of your image (in Markdown format, GitHub \nflavor preferred). Only one description is required; you don\u2019t need additional\ndescriptions for each tag. 
The file should also: \n\nBe named README.md\nReside in the repo for the \u201clatest\u201d tag\nBe no longer than absolutely necessary, while still addressing all the\ncontent requirements\n\nIn terms of content, the long description must include the following sections:\n\nOverview links\nHow-to/usage\nIssues contributions\n\nOverview links\nThis section should provide:\n\n\nan overview of the software contained in the image, similar to the\nintroduction in a Wikipedia entry\n\n\na selection of links to outside resources that help to describe the software\n\n\na mandatory link to the Dockerfile\n\n\nHow-to/usage\nA section that describes how to run and use the image, including common use\ncases and example Dockerfiles (if applicable). Try to provide clear, step-by-\nstep instructions wherever possible.\nIssues contributions\nIn this section, point users to any resources that can help them contribute to\nthe project. Include contribution guidelines and any specific instructions\nrelated to your development practices. Include a link to\nDocker\u2019s resources for contributors.\nBe sure to include contact info, handles, etc. for official maintainers.\nAlso include information letting users know where they can go for help and how\nthey can file issues with the repo. Point them to any specific IRC channels,\nissue trackers, contacts, additional \u201chow-to\u201d information or other resources.\nLicense\nInclude a file, LICENSE, of any applicable license. Docker recommends using\nthe license of the software contained in the image, provided it allows Docker,\nInc. to legally build and distribute the image. Otherwise, Docker recommends\nadopting the Expat license\n(a.k.a., the MIT or X11 license).\nExamples\nBelow are sample short and long description files for an imaginary image\ncontaining Ruby on Rails.\nShort description\nREADME-short.txt\nRuby on Rails is an open-source application framework written in Ruby. 
It emphasizes best practices such as convention over configuration, active record pattern, and the model-view-controller pattern.\nLong description\nREADME.md\n# What is Ruby on Rails\n\nRuby on Rails, often simply referred to as Rails, is an open source web application framework which runs via the Ruby programming language. It is a full-stack framework: it allows creating pages and applications that gather information from the web server, talk to or query the database, and render templates out of the box. As a result, Rails features a routing system that is independent of the web server.\n\n [wikipedia.org/wiki/Ruby_on_Rails](https://en.wikipedia.org/wiki/Ruby_on_Rails)\n\n# How to use this image\n\n## Create a `Dockerfile` in your rails app project\n\n FROM rails:onbuild\n\nPut this file in the root of your app, next to the `Gemfile`.\n\nThis image includes multiple `ONBUILD` triggers so that should be all that you need for most applications. The build will `ADD . /usr/src/app`, `RUN bundle install`, `EXPOSE 3000`, and set the default command to `rails server`.\n\nThen build and run the Docker image.\n\n docker build -t my-rails-app .\n docker run --name some-rails-app -d my-rails-app\n\nTest it by visiting `http://container-ip:3000` in a browser. On the other hand, if you need access outside the host on port 8080:\n\n docker run --name some-rails-app -p 8080:3000 -d my-rails-app\n\nThen go to `http://localhost:8080` or `http://host-ip:8080` in a browser.\n\n\nFor more examples, take a look at these repos: \n\nGo\nPostgreSQL\nBuildpack-deps\n\"Hello World\" minimal container\nNode\n\nSubmit your repo\nOnce you've checked off everything in these guidelines, and are confident your\nimage is ready for primetime, please contact us at\npartners@docker.com to have your project\nconsidered for the Official Repos program.",
"title": "Official Repo Guidelines"
},
{
"loc": "/docker-hub/official_repos#guidelines-for-creating-and-documenting-official-repositories",
"tags": "",
"text": "",
"title": "Guidelines for Creating and Documenting Official Repositories"
},
{
"loc": "/docker-hub/official_repos#introduction",
"tags": "",
"text": "You\u2019ve been given the job of creating an image for an Official Repository\nhosted on Docker Hub Registry . These are\nour guidelines for getting that task done. Even if you\u2019re not\nplanning to create an Official Repo, you can think of these guidelines as best\npractices for image creation generally. This document consists of two major sections: A list of expected files, resources and supporting items for your image,\nalong with best practices for creating those items Examples embodying those practices",
"title": "Introduction"
},
{
"loc": "/docker-hub/official_repos#expected-files-resources",
"tags": "",
"text": "A Git repository Your image needs to live in a Git repository, preferably on GitHub. (If you\u2019d\nlike to use a different provider, please contact us \ndirectly.) Docker strongly recommends that this repo be publicly\naccessible. If the repo is private or has otherwise limited access, you must provide a\nmeans of at least \u201cread-only\u201d access for both general users and for the\ndocker-library maintainers, who need access for review and building purposes. A Dockerfile Complete information on Dockerfile s can be found in the Reference section .\nWe also have a page discussing best practices for writing Dockerfile s .\nYour Dockerfile should adhere to the following: It must be written either by using FROM scratch or be based on another,\nestablished Official Image. It must follow Dockerfile best practices. These are discussed on the best practices page . In addition,\nDocker engineer Michael Crosby has some good tips for Dockerfiles in\nthis blog post . While ONBUILD triggers \nare not required, if you choose to use them you should: Build both ONBUILD and non- ONBUILD images, with the ONBUILD image\nbuilt FROM the non- ONBUILD image. The ONBUILD image should be specifically tagged, for example, ruby:\nlatest and ruby:onbuild , or ruby:2 and ruby:2-onbuild A short description Include a brief description of your image (in plaintext). Only one description\nis required; you don\u2019t need additional descriptions for each tag. The file\nshould also: Be named README-short.txt Reside in the repo for the \u201clatest\u201d tag Not exceed 100 characters A logo Include a logo of your company or the product (png format preferred). Only one\nlogo is required; you don\u2019t need additional logo files for each tag. 
The logo\nfile should have the following characteristics: Be named logo.png Should reside in the repo for the \u201clatest\u201d tag Should fit inside a 200px square, maximized in one dimension (preferably the\nwidth) Square or wide (landscape) is preferred over tall (portrait), but exceptions\ncan be made based on the logo needed A long description Include a comprehensive description of your image (in Markdown format, GitHub \nflavor preferred). Only one description is required; you don\u2019t need additional\ndescriptions for each tag. The file should also: Be named README.md Reside in the repo for the \u201clatest\u201d tag Be no longer than absolutely necessary, while still addressing all the\ncontent requirements In terms of content, the long description must include the following sections: Overview links How-to/usage Issues contributions Overview links This section should provide: an overview of the software contained in the image, similar to the\nintroduction in a Wikipedia entry a selection of links to outside resources that help to describe the software a mandatory link to the Dockerfile How-to/usage A section that describes how to run and use the image, including common use\ncases and example Dockerfile s (if applicable). Try to provide clear, step-by-\nstep instructions wherever possible. Issues contributions In this section, point users to any resources that can help them contribute to\nthe project. Include contribution guidelines and any specific instructions\nrelated to your development practices. Include a link to Docker\u2019s resources for contributors .\nBe sure to include contact info, handles, etc. for official maintainers. Also include information letting users know where they can go for help and how\nthey can file issues with the repo. Point them to any specific IRC channels,\nissue trackers, contacts, additional \u201chow-to\u201d information or other resources. License Include a file, LICENSE , of any applicable license. 
Docker recommends using\nthe license of the software contained in the image, provided it allows Docker,\nInc. to legally build and distribute the image. Otherwise, Docker recommends\nadopting the Expat license \n(a.k.a., the MIT or X11 license).",
"title": "Expected Files & Resources"
},
{
"loc": "/docker-hub/official_repos#examples",
"tags": "",
"text": "Below are sample short and long description files for an imaginary image\ncontaining Ruby on Rails. Short description README-short.txt Ruby on Rails is an open-source application framework written in Ruby. It emphasizes best practices such as convention over configuration, active record pattern, and the model-view-controller pattern. Long description README.md # What is Ruby on Rails\n\nRuby on Rails, often simply referred to as Rails, is an open source web application framework which runs via the Ruby programming language. It is a full-stack framework: it allows creating pages and applications that gather information from the web server, talk to or query the database, and render templates out of the box. As a result, Rails features a routing system that is independent of the web server. [wikipedia.org/wiki/Ruby_on_Rails](https://en.wikipedia.org/wiki/Ruby_on_Rails)\n\n# How to use this image\n\n## Create a `Dockerfile` in your rails app project\n\n FROM rails:onbuild\n\nPut this file in the root of your app, next to the `Gemfile`.\n\nThis image includes multiple `ONBUILD` triggers so that should be all that you need for most applications. The build will `ADD . /usr/src/app`, `RUN bundle install`, `EXPOSE 3000`, and set the default command to `rails server`.\n\nThen build and run the Docker image.\n\n docker build -t my-rails-app .\n docker run --name some-rails-app -d my-rails-app\n\nTest it by visiting `http://container-ip:3000` in a browser. On the other hand, if you need access outside the host on port 8080:\n\n docker run --name some-rails-app -p 8080:3000 -d my-rails-app\n\nThen go to `http://localhost:8080` or `http://host-ip:8080` in a browser. For more examples, take a look at these repos: Go PostgreSQL Buildpack-deps \"Hello World\" minimal container Node",
"title": "Examples"
},
{
"loc": "/docker-hub/official_repos#submit-your-repo",
"tags": "",
"text": "Once you've checked off everything in these guidelines, and are confident your\nimage is ready for primetime, please contact us at partners@docker.com to have your project\nconsidered for the Official Repos program.",
"title": "Submit your repo"
},
{
"loc": "/examples/",
"tags": "",
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
"title": "**HIDDEN**"
},
{
"loc": "/examples#table-of-contents",
"tags": "",
"text": "",
"title": "Table of Contents"
},
{
"loc": "/examples#about",
"tags": "",
"text": "Docker Release Notes Understanding Docker",
"title": "About"
},
{
"loc": "/examples#installation",
"tags": "",
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
"title": "Installation"
},
{
"loc": "/examples#user-guide",
"tags": "",
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
"title": "User Guide"
},
{
"loc": "/examples#docker-hub",
"tags": "",
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
"title": "Docker Hub"
},
{
"loc": "/examples#examples",
"tags": "",
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
"title": "Examples"
},
{
"loc": "/examples#articles",
"tags": "",
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
"title": "Articles"
},
{
"loc": "/examples#reference",
"tags": "",
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
"title": "Reference"
},
{
"loc": "/examples#contributor-guide",
"tags": "",
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
"title": "Contributor Guide"
},
{
"loc": "/examples/nodejs_web_app/",
"tags": "",
"text": "Dockerizing a Node.js Web App\n\nNote: \n- If you don't like sudo then see Giving non-root\n access\n\nThe goal of this example is to show you how you can build your own\nDocker images from a parent image using a Dockerfile\n. We will do that by making a simple Node.js hello world web\napplication running on CentOS. You can get the full source code at\nhttps://github.com/enokd/docker-node-hello/.\nCreate Node.js app\nFirst, create a directory src where all the files\nwould live. Then create a package.json file that\ndescribes your app and its dependencies:\n{\n \"name\": \"docker-centos-hello\",\n \"private\": true,\n \"version\": \"0.0.1\",\n \"description\": \"Node.js Hello world app on CentOS using docker\",\n \"author\": \"Daniel Gasienica daniel@gasienica.ch\",\n \"dependencies\": {\n \"express\": \"3.2.4\"\n }\n}\n\nThen, create an index.js file that defines a web\napp using the Express.js framework:\nvar express = require('express');\n\n// Constants\nvar PORT = 8080;\n\n// App\nvar app = express();\napp.get('/', function (req, res) {\n res.send('Hello world\\n');\n});\n\napp.listen(PORT);\nconsole.log('Running on http://localhost:' + PORT);\n\nIn the next steps, we'll look at how you can run this app inside a\nCentOS container using Docker. First, you'll need to build a Docker\nimage of your app.\nCreating a Dockerfile\nCreate an empty file called Dockerfile:\ntouch Dockerfile\n\nOpen the Dockerfile in your favorite text editor\nDefine the parent image you want to use to build your own image on\ntop of. Here, we'll use\nCentOS (tag: centos6)\navailable on the Docker Hub:\nFROM centos:centos6\n\nSince we're building a Node.js app, you'll have to install Node.js as\nwell as npm on your CentOS image. Node.js is required to run your app\nand npm to install your app's dependencies defined in\npackage.json. 
To install the right package for\nCentOS, we'll use the instructions from the Node.js wiki:\n# Enable EPEL for Node.js\nRUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm\n# Install Node.js and npm\nRUN yum install -y npm\n\nTo bundle your app's source code inside the Docker image, use the COPY\ninstruction:\n# Bundle app source\nCOPY . /src\n\nInstall your app dependencies using the npm binary:\n# Install app dependencies\nRUN cd /src; npm install\n\nYour app binds to port 8080 so you'll use the EXPOSE instruction to have\nit mapped by the docker daemon:\nEXPOSE 8080\n\nLast but not least, define the command to run your app using CMD which\ndefines your runtime, i.e. node, and the path to our app, i.e. src/index.js\n(see the step where we added the source to the container):\nCMD [\"node\", \"/src/index.js\"]\n\nYour Dockerfile should now look like this:\nFROM centos:centos6\n\n# Enable EPEL for Node.js\nRUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm\n# Install Node.js and npm\nRUN yum install -y npm\n\n# Bundle app source\nCOPY . /src\n# Install app dependencies\nRUN cd /src; npm install\n\nEXPOSE 8080\nCMD [\"node\", \"/src/index.js\"]\n\nBuilding your image\nGo to the directory that has your Dockerfile and run the following command\nto build a Docker image. The -t flag lets you tag your image so it's easier\nto find later using the docker images command:\n$ sudo docker build -t <your username>/centos-node-hello .\n\nYour image will now be listed by Docker:\n$ sudo docker images\n\n# Example\nREPOSITORY TAG ID CREATED\ncentos centos6 539c0211cd76 8 weeks ago\n<your username>/centos-node-hello latest d64d3505b0d2 2 hours ago\n\nRun the image\nRunning your image with -d runs the container in detached mode, leaving the\ncontainer running in the background. The -p flag redirects a public port to\na private port in the container. 
Run the image you previously built:\n$ sudo docker run -p 49160:8080 -d <your username>/centos-node-hello\n\nPrint the output of your app:\n# Get container ID\n$ sudo docker ps\n\n# Print app output\n$ sudo docker logs <container id>\n\n# Example\nRunning on http://localhost:8080\n\nTest\nTo test your app, get the port of your app that Docker mapped:\n$ sudo docker ps\n\n# Example\nID IMAGE COMMAND ... PORTS\necce33b30ebf <your username>/centos-node-hello:latest node /src/index.js 49160->8080\n\nIn the example above, Docker mapped the 8080 port of the container to 49160.\nNow you can call your app using curl (install if needed via:\nsudo apt-get install curl):\n$ curl -i localhost:49160\n\nHTTP/1.1 200 OK\nX-Powered-By: Express\nContent-Type: text/html; charset=utf-8\nContent-Length: 12\nDate: Sun, 02 Jun 2013 03:53:22 GMT\nConnection: keep-alive\n\nHello world\n\nIf you use Boot2Docker on OS X, the port is actually mapped to the Docker host VM,\nand you should use the following command:\n$ curl $(boot2docker ip):49160\n\nWe hope this tutorial helped you get up and running with Node.js and\nCentOS on Docker. You can get the full source code at\nhttps://github.com/enokd/docker-node-hello/.",
|
|
"title": "Dockerizing a Node.js web application"
|
|
},
|
|
{
|
|
"loc": "/examples/nodejs_web_app#dockerizing-a-nodejs-web-app",
|
|
"tags": "",
"text": "Note: If you don't like sudo then see Giving non-root access. The goal of this example is to show you how you can build your own\nDocker images from a parent image using a Dockerfile. We will do that by making a simple Node.js hello world web\napplication running on CentOS. You can get the full source code at https://github.com/enokd/docker-node-hello/.",
"title": "Dockerizing a Node.js Web App"
},
{
"loc": "/examples/nodejs_web_app#create-nodejs-app",
"tags": "",
"text": "First, create a directory src where all the files\nwould live. Then create a package.json file that\ndescribes your app and its dependencies: {\n \"name\": \"docker-centos-hello\",\n \"private\": true,\n \"version\": \"0.0.1\",\n \"description\": \"Node.js Hello world app on CentOS using docker\",\n \"author\": \"Daniel Gasienica daniel@gasienica.ch \",\n \"dependencies\": {\n \"express\": \"3.2.4\"\n }\n} Then, create an index.js file that defines a web\napp using the Express.js framework: var express = require('express');\n\n// Constants\nvar PORT = 8080;\n\n// App\nvar app = express();\napp.get('/', function (req, res) {\n res.send('Hello world\\n');\n});\n\napp.listen(PORT);\nconsole.log('Running on http://localhost:' + PORT); In the next steps, we'll look at how you can run this app inside a\nCentOS container using Docker. First, you'll need to build a Docker\nimage of your app.",
"title": "Create Node.js app"
},
{
"loc": "/examples/nodejs_web_app#creating-a-dockerfile",
"tags": "",
"text": "Create an empty file called Dockerfile : touch Dockerfile Open the Dockerfile in your favorite text editor Define the parent image you want to use to build your own image on\ntop of. Here, we'll use CentOS (tag: centos6 )\navailable on the Docker Hub : FROM centos:centos6 Since we're building a Node.js app, you'll have to install Node.js as\nwell as npm on your CentOS image. Node.js is required to run your app\nand npm to install your app's dependencies defined in package.json . To install the right package for\nCentOS, we'll use the instructions from the Node.js wiki : # Enable EPEL for Node.js\nRUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm\n# Install Node.js and npm\nRUN yum install -y npm To bundle your app's source code inside the Docker image, use the COPY \ninstruction: # Bundle app source\nCOPY . /src Install your app dependencies using the npm binary: # Install app dependencies\nRUN cd /src; npm install Your app binds to port 8080 so you'll use the EXPOSE instruction to have\nit mapped by the docker daemon: EXPOSE 8080 Last but not least, define the command to run your app using CMD which\ndefines your runtime, i.e. node , and the path to our app, i.e. src/index.js \n(see the step where we added the source to the container): CMD [\"node\", \"/src/index.js\"] Your Dockerfile should now look like this: FROM centos:centos6\n\n# Enable EPEL for Node.js\nRUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm\n# Install Node.js and npm\nRUN yum install -y npm\n\n# Bundle app source\nCOPY . /src\n# Install app dependencies\nRUN cd /src; npm install\n\nEXPOSE 8080\nCMD [\"node\", \"/src/index.js\"]",
"title": "Creating a Dockerfile"
},
{
"loc": "/examples/nodejs_web_app#building-your-image",
"tags": "",
"text": "Go to the directory that has your Dockerfile and run the following command\nto build a Docker image. The -t flag lets you tag your image so it's easier\nto find later using the docker images command: $ sudo docker build -t <your username>/centos-node-hello . Your image will now be listed by Docker: $ sudo docker images\n\n# Example\nREPOSITORY TAG ID CREATED\ncentos centos6 539c0211cd76 8 weeks ago\n<your username>/centos-node-hello latest d64d3505b0d2 2 hours ago",
"title": "Building your image"
},
{
"loc": "/examples/nodejs_web_app#run-the-image",
"tags": "",
"text": "Running your image with -d runs the container in detached mode, leaving the\ncontainer running in the background. The -p flag redirects a public port to\na private port in the container. Run the image you previously built: $ sudo docker run -p 49160:8080 -d <your username>/centos-node-hello Print the output of your app: # Get container ID\n$ sudo docker ps\n\n# Print app output\n$ sudo docker logs <container id>\n\n# Example\nRunning on http://localhost:8080",
"title": "Run the image"
},
{
"loc": "/examples/nodejs_web_app#test",
"tags": "",
"text": "To test your app, get the port of your app that Docker mapped: $ sudo docker ps\n\n# Example\nID IMAGE COMMAND ... PORTS\necce33b30ebf <your username>/centos-node-hello:latest node /src/index.js 49160->8080 In the example above, Docker mapped the 8080 port of the container to 49160. Now you can call your app using curl (install if needed via: sudo apt-get install curl ): $ curl -i localhost:49160\n\nHTTP/1.1 200 OK\nX-Powered-By: Express\nContent-Type: text/html; charset=utf-8\nContent-Length: 12\nDate: Sun, 02 Jun 2013 03:53:22 GMT\nConnection: keep-alive\n\nHello world If you use Boot2docker on OS X, the port is actually mapped to the Docker host VM,\nand you should use the following command: $ curl $(boot2docker ip):49160 We hope this tutorial helped you get up and running with Node.js and\nCentOS on Docker. You can get the full source code at https://github.com/enokd/docker-node-hello/ .",
"title": "Test"
},
{
"loc": "/examples/mongodb/",
"tags": "",
"text": "Dockerizing MongoDB\nIntroduction\nIn this example, we are going to learn how to build a Docker image with\nMongoDB pre-installed. We'll also see how to push that image to the\nDocker Hub registry and share it with others!\nUsing Docker and containers for deploying MongoDB\ninstances will bring several benefits, such as:\n\nEasy to maintain, highly configurable MongoDB instances;\nReady to run and start working within milliseconds;\nBased on globally accessible and shareable images.\n\n\nNote:\nIf you do not like sudo, you might want to check out: \nGiving non-root access.\n\nCreating a Dockerfile for MongoDB\nLet's create our Dockerfile and start building it:\n$ nano Dockerfile\n\nAlthough optional, it is handy to have comments at the beginning of a\nDockerfile explaining its purpose:\n# Dockerizing MongoDB: Dockerfile for building MongoDB images\n# Based on ubuntu:latest, installs MongoDB following the instructions from:\n# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/\n\n\nTip: Dockerfiles are flexible. However, they need to follow a certain\nformat. The first item to be defined is the name of an image, which becomes\nthe parent of your Dockerized MongoDB image.\n\nWe will build our image using the latest version of Ubuntu from the\nDocker Hub Ubuntu repository.\n# Format: FROM repository[:version]\nFROM ubuntu:latest\n\nContinuing, we will declare the MAINTAINER of the Dockerfile:\n# Format: MAINTAINER Name email@addr.ess\nMAINTAINER M.Y. Name myname@addr.ess\n\n\nNote: Although Ubuntu systems have MongoDB packages, they are likely to\nbe outdated. Therefore in this example, we will use the official MongoDB\npackages.\n\nWe will begin with importing the MongoDB public GPG key. 
We will also create\na MongoDB repository file for the package manager.\n# Installation:\n# Import MongoDB public GPG key AND create a MongoDB list file\nRUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10\nRUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list\n\nAfter this initial preparation we can update our packages and install MongoDB.\n# Update apt-get sources AND install MongoDB\nRUN apt-get update && apt-get install -y mongodb-org\n\n\nTip: You can install a specific version of MongoDB by using a list\nof required packages with versions, e.g.:\nRUN apt-get update && apt-get install -y mongodb-org=2.6.1 mongodb-org-server=2.6.1 mongodb-org-shell=2.6.1 mongodb-org-mongos=2.6.1 mongodb-org-tools=2.6.1\n\n\nMongoDB requires a data directory. Let's create it as the final step of our\ninstallation instructions.\n# Create the MongoDB data directory\nRUN mkdir -p /data/db\n\nLastly we set the ENTRYPOINT which will tell Docker to run mongod inside\nthe containers launched from our MongoDB image. And for ports, we will use\nthe EXPOSE instruction.\n# Expose port 27017 from the container to the host\nEXPOSE 27017\n\n# Set /usr/bin/mongod as the dockerized entry-point application\nENTRYPOINT /usr/bin/mongod\n\nNow save the file and let's build our image.\n\nNote:\nThe full version of this Dockerfile can be found here.\n\nBuilding the MongoDB Docker image\nWith our Dockerfile, we can now build the MongoDB image using Docker. Unless\nexperimenting, it is always a good practice to tag Docker images by passing the\n--tag option to the docker build command.\n# Format: sudo docker build --tag/-t <user-name>/<repository> .\n# Example:\n$ sudo docker build --tag my/repo .\n\nOnce this command is issued, Docker will go through the Dockerfile and build\nthe image. 
The final image will be tagged my/repo.\nPushing the MongoDB image to Docker Hub\nAll Docker image repositories can be hosted and shared on\nDocker Hub with the docker push command. For this,\nyou need to be logged-in.\n# Log-in\n$ sudo docker login\nUsername:\n..\n\n# Push the image\n# Format: sudo docker push <user-name>/<repository>\n$ sudo docker push my/repo\nThe push refers to a repository [my/repo] (len: 1)\nSending image list\nPushing repository my/repo (1 tags)\n..\n\nUsing the MongoDB image\nUsing the MongoDB image we created, we can run one or more MongoDB instances\nas daemon process(es).\n# Basic way\n# Usage: sudo docker run --name <name for container> -d <user-name>/<repository>\n$ sudo docker run --name mongo_instance_001 -d my/repo\n\n# Dockerized MongoDB, lean and mean!\n# Usage: sudo docker run --name <name for container> -d <user-name>/<repository> --noprealloc --smallfiles\n$ sudo docker run --name mongo_instance_001 -d my/repo --noprealloc --smallfiles\n\n# Checking out the logs of a MongoDB container\n# Usage: sudo docker logs <name for container>\n$ sudo docker logs mongo_instance_001\n\n# Playing with MongoDB\n# Usage: mongo --port <port you get from `docker ps`>\n$ mongo --port 12345\n\n\nLinking containers\nCross-host linking containers\nCreating an Automated Build",
"title": "Dockerizing MongoDB"
},
{
"loc": "/examples/mongodb#dockerizing-mongodb",
"tags": "",
"text": "",
"title": "Dockerizing MongoDB"
},
{
"loc": "/examples/mongodb#introduction",
"tags": "",
"text": "In this example, we are going to learn how to build a Docker image with\nMongoDB pre-installed. We'll also see how to push that image to the Docker Hub registry and share it with others! Using Docker and containers for deploying MongoDB \ninstances will bring several benefits, such as: Easy to maintain, highly configurable MongoDB instances; Ready to run and start working within milliseconds; Based on globally accessible and shareable images. Note: If you do not like sudo , you might want to check out: Giving non-root access .",
"title": "Introduction"
},
{
"loc": "/examples/mongodb#creating-a-dockerfile-for-mongodb",
"tags": "",
"text": "Let's create our Dockerfile and start building it: $ nano Dockerfile Although optional, it is handy to have comments at the beginning of a Dockerfile explaining its purpose: # Dockerizing MongoDB: Dockerfile for building MongoDB images\n# Based on ubuntu:latest, installs MongoDB following the instructions from:\n# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ Tip: Dockerfiles are flexible. However, they need to follow a certain\nformat. The first item to be defined is the name of an image, which becomes\nthe parent of your Dockerized MongoDB image. We will build our image using the latest version of Ubuntu from the Docker Hub Ubuntu repository. # Format: FROM repository[:version]\nFROM ubuntu:latest Continuing, we will declare the MAINTAINER of the Dockerfile : # Format: MAINTAINER Name email@addr.ess \nMAINTAINER M.Y. Name myname@addr.ess Note: Although Ubuntu systems have MongoDB packages, they are likely to\nbe outdated. Therefore in this example, we will use the official MongoDB\npackages. We will begin with importing the MongoDB public GPG key. We will also create\na MongoDB repository file for the package manager. # Installation:\n# Import MongoDB public GPG key AND create a MongoDB list file\nRUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10\nRUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list After this initial preparation we can update our packages and install MongoDB. # Update apt-get sources AND install MongoDB\nRUN apt-get update && apt-get install -y mongodb-org Tip: You can install a specific version of MongoDB by using a list\nof required packages with versions, e.g.: RUN apt-get update && apt-get install -y mongodb-org=2.6.1 mongodb-org-server=2.6.1 mongodb-org-shell=2.6.1 mongodb-org-mongos=2.6.1 mongodb-org-tools=2.6.1 MongoDB requires a data directory. Let's create it as the final step of our\ninstallation instructions. 
# Create the MongoDB data directory\nRUN mkdir -p /data/db Lastly we set the ENTRYPOINT which will tell Docker to run mongod inside\nthe containers launched from our MongoDB image. And for ports, we will use\nthe EXPOSE instruction. # Expose port 27017 from the container to the host\nEXPOSE 27017\n\n# Set /usr/bin/mongod as the dockerized entry-point application\nENTRYPOINT /usr/bin/mongod Now save the file and let's build our image. Note: The full version of this Dockerfile can be found here .",
"title": "Creating a Dockerfile for MongoDB"
},
{
"loc": "/examples/mongodb#building-the-mongodb-docker-image",
"tags": "",
"text": "With our Dockerfile, we can now build the MongoDB image using Docker. Unless\nexperimenting, it is always a good practice to tag Docker images by passing the --tag option to the docker build command. # Format: sudo docker build --tag/-t <user-name>/<repository> .\n# Example:\n$ sudo docker build --tag my/repo . Once this command is issued, Docker will go through the Dockerfile and build\nthe image. The final image will be tagged my/repo.",
"title": "Building the MongoDB Docker image"
},
{
"loc": "/examples/mongodb#pushing-the-mongodb-image-to-docker-hub",
"tags": "",
"text": "All Docker image repositories can be hosted and shared on Docker Hub with the docker push command. For this,\nyou need to be logged-in. # Log-in\n$ sudo docker login\nUsername:\n..\n\n# Push the image\n# Format: sudo docker push <user-name>/<repository>\n$ sudo docker push my/repo\nThe push refers to a repository [my/repo] (len: 1)\nSending image list\nPushing repository my/repo (1 tags)\n..",
"title": "Pushing the MongoDB image to Docker Hub"
},
{
"loc": "/examples/mongodb#using-the-mongodb-image",
"tags": "",
"text": "Using the MongoDB image we created, we can run one or more MongoDB instances\nas daemon process(es). # Basic way\n# Usage: sudo docker run --name <name for container> -d <user-name>/<repository>\n$ sudo docker run --name mongo_instance_001 -d my/repo\n\n# Dockerized MongoDB, lean and mean!\n# Usage: sudo docker run --name <name for container> -d <user-name>/<repository> --noprealloc --smallfiles\n$ sudo docker run --name mongo_instance_001 -d my/repo --noprealloc --smallfiles\n\n# Checking out the logs of a MongoDB container\n# Usage: sudo docker logs <name for container>\n$ sudo docker logs mongo_instance_001\n\n# Playing with MongoDB\n# Usage: mongo --port <port you get from `docker ps`>\n$ mongo --port 12345 Linking containers Cross-host linking containers Creating an Automated Build",
"title": "Using the MongoDB image"
},
{
"loc": "/examples/running_redis_service/",
"tags": "",
"text": "Dockerizing a Redis Service\nVery simple, no frills, Redis service attached to a web application\nusing a link.\nCreate a docker container for Redis\nFirstly, we create a Dockerfile for our new Redis\nimage.\nFROM ubuntu:14.04\nRUN apt-get update && apt-get install -y redis-server\nEXPOSE 6379\nENTRYPOINT [\"/usr/bin/redis-server\"]\n\nNext we build an image from our Dockerfile.\nReplace <your username> with your own user name.\n$ sudo docker build -t <your username>/redis .\n\nRun the service\nUse the image we've just created and name your container redis.\nRunning the service with -d runs the container in detached mode, leaving\nthe container running in the background.\nImportantly, we're not exposing any ports on our container. Instead\nwe're going to use a container link to provide access to our Redis\ndatabase.\n$ sudo docker run --name redis -d <your username>/redis\n\nCreate your web application container\nNext we can create a container for our application. We're going to use\nthe --link flag to create a link to the redis container we've just\ncreated with an alias of db. This will create a secure tunnel to the\nredis container and expose the Redis instance running inside that\ncontainer to only this container.\n$ sudo docker run --link redis:db -i -t ubuntu:14.04 /bin/bash\n\nOnce inside our freshly created container we need to install Redis to\nget the redis-cli binary to test our connection.\n$ sudo apt-get update\n$ sudo apt-get install redis-server\n$ sudo service redis-server stop\n\nAs we've used the --link redis:db option, Docker\nhas created some environment variables in our web application container.\n$ env | grep DB_\n\n# Should return something similar to this with your values\nDB_NAME=/violet_wolf/db\nDB_PORT_6379_TCP_PORT=6379\nDB_PORT=tcp://172.17.0.33:6379\nDB_PORT_6379_TCP=tcp://172.17.0.33:6379\nDB_PORT_6379_TCP_ADDR=172.17.0.33\nDB_PORT_6379_TCP_PROTO=tcp\n\nWe can see that we've got a small list of environment variables prefixed\nwith DB. 
The DB comes from the link alias specified when we launched\nthe container. Let's use the DB_PORT_6379_TCP_ADDR variable to connect to\nour Redis container.\n$ redis-cli -h $DB_PORT_6379_TCP_ADDR\n$ redis 172.17.0.33:6379>\n$ redis 172.17.0.33:6379> set docker awesome\nOK\n$ redis 172.17.0.33:6379> get docker\n\"awesome\"\n$ redis 172.17.0.33:6379> exit\n\nWe could easily use this or other environment variables in our web\napplication to make a connection to our redis\ncontainer.",
"title": "Dockerizing a Redis service"
},
{
"loc": "/examples/running_redis_service#dockerizing-a-redis-service",
"tags": "",
"text": "Very simple, no frills, Redis service attached to a web application\nusing a link.",
"title": "Dockerizing a Redis Service"
},
{
"loc": "/examples/running_redis_service#create-a-docker-container-for-redis",
"tags": "",
"text": "Firstly, we create a Dockerfile for our new Redis\nimage. FROM ubuntu:14.04\nRUN apt-get update && apt-get install -y redis-server\nEXPOSE 6379\nENTRYPOINT [\"/usr/bin/redis-server\"] Next we build an image from our Dockerfile .\nReplace <your username> with your own user name. $ sudo docker build -t <your username>/redis .",
"title": "Create a docker container for Redis"
},
{
"loc": "/examples/running_redis_service#run-the-service",
"tags": "",
"text": "Use the image we've just created and name your container redis . Running the service with -d runs the container in detached mode, leaving\nthe container running in the background. Importantly, we're not exposing any ports on our container. Instead\nwe're going to use a container link to provide access to our Redis\ndatabase. $ sudo docker run --name redis -d <your username>/redis",
"title": "Run the service"
},
{
"loc": "/examples/running_redis_service#create-your-web-application-container",
"tags": "",
"text": "Next we can create a container for our application. We're going to use\nthe --link flag to create a link to the redis container we've just\ncreated with an alias of db . This will create a secure tunnel to the redis container and expose the Redis instance running inside that\ncontainer to only this container. $ sudo docker run --link redis:db -i -t ubuntu:14.04 /bin/bash Once inside our freshly created container we need to install Redis to\nget the redis-cli binary to test our connection. $ sudo apt-get update\n$ sudo apt-get install redis-server\n$ sudo service redis-server stop As we've used the --link redis:db option, Docker\nhas created some environment variables in our web application container. $ env | grep DB_\n\n# Should return something similar to this with your values\nDB_NAME=/violet_wolf/db\nDB_PORT_6379_TCP_PORT=6379\nDB_PORT=tcp://172.17.0.33:6379\nDB_PORT_6379_TCP=tcp://172.17.0.33:6379\nDB_PORT_6379_TCP_ADDR=172.17.0.33\nDB_PORT_6379_TCP_PROTO=tcp We can see that we've got a small list of environment variables prefixed\nwith DB . The DB comes from the link alias specified when we launched\nthe container. Let's use the DB_PORT_6379_TCP_ADDR variable to connect to\nour Redis container. $ redis-cli -h $DB_PORT_6379_TCP_ADDR\n$ redis 172.17.0.33:6379>\n$ redis 172.17.0.33:6379> set docker awesome\nOK\n$ redis 172.17.0.33:6379> get docker\n\"awesome\"\n$ redis 172.17.0.33:6379> exit We could easily use this or other environment variables in our web\napplication to make a connection to our redis \ncontainer.",
"title": "Create your web application container"
},
{
"loc": "/examples/postgresql_service/",
"tags": "",
"text": "Dockerizing PostgreSQL\n\nNote: \n- If you don't like sudo then see Giving non-root\n access\n\nInstalling PostgreSQL on Docker\nAssuming there is no Docker image that suits your needs on the Docker\nHub, you can create one yourself.\nStart by creating a new Dockerfile:\n\nNote: \nThis PostgreSQL setup is for development-only purposes. Refer to the\nPostgreSQL documentation to fine-tune these settings so that it is\nsuitably secure.\n\n#\n# example Dockerfile for http://docs.docker.com/examples/postgresql_service/\n#\n\nFROM ubuntu\nMAINTAINER SvenDowideit@docker.com\n\n# Add the PostgreSQL PGP key to verify their Debian packages.\n# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc\nRUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8\n\n# Add PostgreSQL's repository. It contains the most recent stable release\n# of PostgreSQL, ``9.3``.\nRUN echo \"deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main\" > /etc/apt/sources.list.d/pgdg.list\n\n# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3\n# There are some warnings (in red) that show up during the build. 
You can hide\n# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive\nRUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3\n\n# Note: The official Debian and Ubuntu images automatically ``apt-get clean``\n# after each ``apt-get``\n\n# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``\nUSER postgres\n\n# Create a PostgreSQL role named ``docker`` with ``docker`` as the password and\n# then create a database `docker` owned by the ``docker`` role.\n# Note: here we use ``&&\\`` to run commands one after the other - the ``\\``\n# allows the RUN command to span multiple lines.\nRUN /etc/init.d/postgresql start &&\\\n psql --command \"CREATE USER docker WITH SUPERUSER PASSWORD 'docker';\" &&\\\n createdb -O docker docker\n\n# Adjust PostgreSQL configuration so that remote connections to the\n# database are possible. \nRUN echo \"host all all 0.0.0.0/0 md5\" >> /etc/postgresql/9.3/main/pg_hba.conf\n\n# And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``\nRUN echo \"listen_addresses='*'\" >> /etc/postgresql/9.3/main/postgresql.conf\n\n# Expose the PostgreSQL port\nEXPOSE 5432\n\n# Add VOLUMEs to allow backup of config, logs and databases\nVOLUME [\"/etc/postgresql\", \"/var/log/postgresql\", \"/var/lib/postgresql\"]\n\n# Set the default command to run when starting the container\nCMD [\"/usr/lib/postgresql/9.3/bin/postgres\", \"-D\", \"/var/lib/postgresql/9.3/main\", \"-c\", \"config_file=/etc/postgresql/9.3/main/postgresql.conf\"]\n\nBuild an image from the Dockerfile and assign it a name.\n$ sudo docker build -t eg_postgresql .\n\nAnd run the PostgreSQL server container (in the foreground):\n$ sudo docker run --rm -P --name pg_test eg_postgresql\n\nThere are 2 ways to connect to the PostgreSQL server. 
We can use Link\nContainers, or we can access it from our host\n(or the network).\n\nNote: \nThe --rm removes the container and its image when\nthe container exits successfully.\n\nUsing container linking\nContainers can be linked to another container's ports directly using\n--link remote_name:local_alias in the client's\ndocker run. This will set a number of environment\nvariables that can then be used to connect:\n$ sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash\n\npostgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password\n\nConnecting from your host system\nAssuming you have the postgresql-client installed, you can use the\nhost-mapped port to test as well. You need to use docker ps\nto find out what local host port the container is mapped to\nfirst:\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n5e24362f27f6 eg_postgresql:latest /usr/lib/postgresql/ About an hour ago Up About an hour 0.0.0.0:49153->5432/tcp pg_test\n$ psql -h localhost -p 49153 -d docker -U docker --password\n\nTesting the database\nOnce you have authenticated and have a docker=#\nprompt, you can create a table and populate it.\npsql (9.3.1)\nType \"help\" for help.\n\ndocker=# CREATE TABLE cities (\ndocker(# name varchar(80),\ndocker(# location point\ndocker(# );\nCREATE TABLE\ndocker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');\nINSERT 0 1\ndocker=# select * from cities;\n name | location\n---------------+-----------\n San Francisco | (-194,53)\n(1 row)\n\nUsing the container volumes\nYou can use the defined volumes to inspect the PostgreSQL log files and\nto backup your configuration and data:\n$ sudo docker run --rm --volumes-from pg_test -t -i busybox sh\n\n/ # ls\nbin etc lib linuxrc mnt proc run sys usr\ndev home lib64 media opt root sbin tmp var\n/ # ls /etc/postgresql/9.3/main/\nenvironment pg_hba.conf postgresql.conf\npg_ctl.conf pg_ident.conf start.conf\n/tmp # 
ls /var/log\nldconfig postgresql",
"title": "Dockerizing a PostgreSQL service"
},
{
"loc": "/examples/postgresql_service#dockerizing-postgresql",
"tags": "",
"text": "Note: If you don't like sudo then see Giving non-root access.",
"title": "Dockerizing PostgreSQL"
},
{
"loc": "/examples/postgresql_service#installing-postgresql-on-docker",
"tags": "",
"text": "Assuming there is no Docker image that suits your needs on the Docker\nHub , you can create one yourself. Start by creating a new Dockerfile : Note : \nThis PostgreSQL setup is for development-only purposes. Refer to the\nPostgreSQL documentation to fine-tune these settings so that it is\nsuitably secure. #\n# example Dockerfile for http://docs.docker.com/examples/postgresql_service/\n#\n\nFROM ubuntu\nMAINTAINER SvenDowideit@docker.com\n\n# Add the PostgreSQL PGP key to verify their Debian packages.\n# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc\nRUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8\n\n# Add PostgreSQL's repository. It contains the most recent stable release\n# of PostgreSQL, ``9.3``.\nRUN echo \"deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main\" > /etc/apt/sources.list.d/pgdg.list\n\n# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3\n# There are some warnings (in red) that show up during the build. 
You can hide\n# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive\nRUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3\n\n# Note: The official Debian and Ubuntu images automatically ``apt-get clean``\n# after each ``apt-get``\n\n# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``\nUSER postgres\n\n# Create a PostgreSQL role named ``docker`` with ``docker`` as the password and\n# then create a database `docker` owned by the ``docker`` role.\n# Note: here we use ``&&\\`` to run commands one after the other - the ``\\``\n# allows the RUN command to span multiple lines.\nRUN /etc/init.d/postgresql start &&\\\n psql --command \"CREATE USER docker WITH SUPERUSER PASSWORD 'docker';\" &&\\\n createdb -O docker docker\n\n# Adjust PostgreSQL configuration so that remote connections to the\n# database are possible. \nRUN echo \"host all all 0.0.0.0/0 md5\" >> /etc/postgresql/9.3/main/pg_hba.conf\n\n# And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``\nRUN echo \"listen_addresses='*'\" >> /etc/postgresql/9.3/main/postgresql.conf\n\n# Expose the PostgreSQL port\nEXPOSE 5432\n\n# Add VOLUMEs to allow backup of config, logs and databases\nVOLUME [\"/etc/postgresql\", \"/var/log/postgresql\", \"/var/lib/postgresql\"]\n\n# Set the default command to run when starting the container\nCMD [\"/usr/lib/postgresql/9.3/bin/postgres\", \"-D\", \"/var/lib/postgresql/9.3/main\", \"-c\", \"config_file=/etc/postgresql/9.3/main/postgresql.conf\"] Build an image from the Dockerfile and assign it a name. $ sudo docker build -t eg_postgresql . And run the PostgreSQL server container (in the foreground): $ sudo docker run --rm -P --name pg_test eg_postgresql There are 2 ways to connect to the PostgreSQL server. 
We can use Link\nContainers , or we can access it from our host\n(or the network). Note : \nThe --rm removes the container and its image when\nthe container exits successfully. Using container linking Containers can be linked to another container's ports directly using --link remote_name:local_alias in the client's docker run . This will set a number of environment\nvariables that can then be used to connect: $ sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash\n\npostgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password Connecting from your host system Assuming you have the postgresql-client installed, you can use the\nhost-mapped port to test as well. You need to use docker ps \nto find out what local host port the container is mapped to\nfirst: $ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n5e24362f27f6 eg_postgresql:latest /usr/lib/postgresql/ About an hour ago Up About an hour 0.0.0.0:49153->5432/tcp pg_test\n$ psql -h localhost -p 49153 -d docker -U docker --password Testing the database Once you have authenticated and have a docker=# \nprompt, you can create a table and populate it. 
psql (9.3.1)\nType \"help\" for help.\n\ndocker=# CREATE TABLE cities (\ndocker(# name varchar(80),\ndocker(# location point\ndocker(# );\nCREATE TABLE\ndocker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');\nINSERT 0 1\ndocker=# select * from cities;\n name | location\n---------------+-----------\n San Francisco | (-194,53)\n(1 row) Using the container volumes You can use the defined volumes to inspect the PostgreSQL log files and\nto backup your configuration and data: $ sudo docker run --rm --volumes-from pg_test -t -i busybox sh\n\n/ # ls\nbin etc lib linuxrc mnt proc run sys usr\ndev home lib64 media opt root sbin tmp var\n/ # ls /etc/postgresql/9.3/main/\nenvironment pg_hba.conf postgresql.conf\npg_ctl.conf pg_ident.conf start.conf\n/tmp # ls /var/log\nldconfig postgresql",
"title": "Installing PostgreSQL on Docker"
},
{
"loc": "/examples/running_riak_service/",
"tags": "",
"text": "Dockerizing a Riak Service\nThe goal of this example is to show you how to build a Docker image with\nRiak pre-installed.\nCreating a Dockerfile\nCreate an empty file called Dockerfile:\n$ touch Dockerfile\n\nNext, define the parent image you want to use to build your image on top\nof. We'll use Ubuntu (tag:\nlatest), which is available on Docker Hub:\n# Riak\n#\n# VERSION 0.1.0\n\n# Use the Ubuntu base image provided by dotCloud\nFROM ubuntu:latest\nMAINTAINER Hector Castro hector@basho.com\n\nAfter that, we install and setup a few dependencies:\n\ncurl is used to download Basho's APT\n repository key\nlsb-release helps us derive the Ubuntu release\n codename\nopenssh-server allows us to login to\n containers remotely and join Riak nodes to form a cluster\nsupervisor is used manage the OpenSSH and Riak\n processes\n\n\n\n# Install and setup project dependencies\nRUN apt-get update apt-get install -y curl lsb-release supervisor openssh-server\n\nRUN mkdir -p /var/run/sshd\nRUN mkdir -p /var/log/supervisor\n\nRUN locale-gen en_US en_US.UTF-8\n\nCOPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf\n\nRUN echo 'root:basho' | chpasswd\n\nNext, we add Basho's APT repository:\nRUN curl -sSL http://apt.basho.com/gpg/basho.apt.key | apt-key add --\nRUN echo \"deb http://apt.basho.com $(lsb_release -cs) main\" /etc/apt/sources.list.d/basho.list\n\nAfter that, we install Riak and alter a few defaults:\n# Install Riak and prepare it to run\nRUN apt-get update apt-get install -y riak\nRUN sed -i.bak 's/127.0.0.1/0.0.0.0/' /etc/riak/app.config\nRUN echo \"ulimit -n 4096\" /etc/default/riak\n\nThen, we expose the Riak Protocol Buffers and HTTP interfaces, along\nwith SSH:\n# Expose Riak Protocol Buffers and HTTP interfaces, along with SSH\nEXPOSE 8087 8098 22\n\nFinally, run supervisord so that Riak and OpenSSH\nare started:\nCMD [\"/usr/bin/supervisord\"]\n\nCreate a supervisord configuration file\nCreate an empty file called supervisord.conf. 
Make\nsure it's at the same directory level as your Dockerfile:\ntouch supervisord.conf\n\nPopulate it with the following program definitions:\n[supervisord]\nnodaemon=true\n\n[program:sshd]\ncommand=/usr/sbin/sshd -D\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true\n\n[program:riak]\ncommand=bash -c \". /etc/default/riak /usr/sbin/riak console\"\npidfile=/var/log/riak/riak.pid\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\n\nBuild the Docker image for Riak\nNow you should be able to build a Docker image for Riak:\n$ sudo docker build -t \"yourname/riak\" .\n\nNext steps\nRiak is a distributed database. Many production deployments consist of\nat least five nodes.\nSee the docker-riak project\ndetails on how to deploy a Riak cluster using Docker and Pipework.",
"title": "Dockerizing a Riak service"
},
{
"loc": "/examples/running_riak_service#dockerizing-a-riak-service",
"tags": "",
"text": "The goal of this example is to show you how to build a Docker image with\nRiak pre-installed.",
"title": "Dockerizing a Riak Service"
},
{
"loc": "/examples/running_riak_service#creating-a-dockerfile",
"tags": "",
"text": "Create an empty file called Dockerfile : $ touch Dockerfile Next, define the parent image you want to use to build your image on top\nof. We'll use Ubuntu (tag: latest ), which is available on Docker Hub : # Riak\n#\n# VERSION 0.1.0\n\n# Use the Ubuntu base image provided by dotCloud\nFROM ubuntu:latest\nMAINTAINER Hector Castro hector@basho.com After that, we install and setup a few dependencies: curl is used to download Basho's APT\n repository key lsb-release helps us derive the Ubuntu release\n codename openssh-server allows us to login to\n containers remotely and join Riak nodes to form a cluster supervisor is used manage the OpenSSH and Riak\n processes # Install and setup project dependencies\nRUN apt-get update apt-get install -y curl lsb-release supervisor openssh-server\n\nRUN mkdir -p /var/run/sshd\nRUN mkdir -p /var/log/supervisor\n\nRUN locale-gen en_US en_US.UTF-8\n\nCOPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf\n\nRUN echo 'root:basho' | chpasswd Next, we add Basho's APT repository: RUN curl -sSL http://apt.basho.com/gpg/basho.apt.key | apt-key add --\nRUN echo \"deb http://apt.basho.com $(lsb_release -cs) main\" /etc/apt/sources.list.d/basho.list After that, we install Riak and alter a few defaults: # Install Riak and prepare it to run\nRUN apt-get update apt-get install -y riak\nRUN sed -i.bak 's/127.0.0.1/0.0.0.0/' /etc/riak/app.config\nRUN echo \"ulimit -n 4096\" /etc/default/riak Then, we expose the Riak Protocol Buffers and HTTP interfaces, along\nwith SSH: # Expose Riak Protocol Buffers and HTTP interfaces, along with SSH\nEXPOSE 8087 8098 22 Finally, run supervisord so that Riak and OpenSSH\nare started: CMD [\"/usr/bin/supervisord\"]",
"title": "Creating a Dockerfile"
},
{
"loc": "/examples/running_riak_service#create-a-supervisord-configuration-file",
"tags": "",
"text": "Create an empty file called supervisord.conf . Make\nsure it's at the same directory level as your Dockerfile : touch supervisord.conf Populate it with the following program definitions: [supervisord]\nnodaemon=true\n\n[program:sshd]\ncommand=/usr/sbin/sshd -D\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true\n\n[program:riak]\ncommand=bash -c \". /etc/default/riak /usr/sbin/riak console\"\npidfile=/var/log/riak/riak.pid\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log",
"title": "Create a supervisord configuration file"
},
{
"loc": "/examples/running_riak_service#build-the-docker-image-for-riak",
"tags": "",
"text": "Now you should be able to build a Docker image for Riak: $ sudo docker build -t \" yourname /riak\" .",
"title": "Build the Docker image for Riak"
},
{
"loc": "/examples/running_riak_service#next-steps",
"tags": "",
"text": "Riak is a distributed database. Many production deployments consist of at least five nodes .\nSee the docker-riak project\ndetails on how to deploy a Riak cluster using Docker and Pipework.",
"title": "Next steps"
},
{
"loc": "/examples/running_ssh_service/",
"tags": "",
"text": "Dockerizing an SSH Daemon Service\nBuild an eg_sshd image\nThe following Dockerfile sets up an SSHd service in a container that you\ncan use to connect to and inspect other container's volumes, or to get\nquick access to a test container.\n# sshd\n#\n# VERSION 0.0.2\n\nFROM ubuntu:14.04\nMAINTAINER Sven Dowideit SvenDowideit@docker.com\n\nRUN apt-get update apt-get install -y openssh-server\nRUN mkdir /var/run/sshd\nRUN echo 'root:screencast' | chpasswd\nRUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config\n\n# SSH login fix. Otherwise user is kicked off after login\nRUN sed 's@session\\s*required\\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd\n\nENV NOTVISIBLE \"in users profile\"\nRUN echo \"export VISIBLE=now\" /etc/profile\n\nEXPOSE 22\nCMD [\"/usr/sbin/sshd\", \"-D\"]\n\nBuild the image using:\n$ sudo docker build -t eg_sshd .\n\nRun a test_sshd container\nThen run it. You can then use docker port to find out what host port\nthe container's port 22 is mapped to:\n$ sudo docker run -d -P --name test_sshd eg_sshd\n$ sudo docker port test_sshd 22\n0.0.0.0:49154\n\nAnd now you can ssh as root on the container's IP address (you can find it\nwith docker inspect) or on port 49154 of the Docker daemon's host IP address\n(ip address or ifconfig can tell you that) or localhost if on the\nDocker daemon host:\n$ ssh root@192.168.1.2 -p 49154\n# The password is ``screencast``.\n$$\n\nEnvironment variables\nUsing the sshd daemon to spawn shells makes it complicated to pass environment\nvariables to the user's shell via the normal Docker mechanisms, as sshd scrubs\nthe environment before it starts the shell.\nIf you're setting values in the Dockerfile using ENV, you'll need to push them\nto a shell initialization file like the /etc/profile example in the Dockerfile\nabove.\nIf you need to passdocker run -e ENV=value values, you will need to write a\nshort script to do the same before you start sshd -D 
and then replace the\nCMD with that script.\nClean up\nFinally, clean up after your test by stopping and removing the\ncontainer, and then removing the image.\n$ sudo docker stop test_sshd\n$ sudo docker rm test_sshd\n$ sudo docker rmi eg_sshd",
"title": "Dockerizing an SSH service"
},
{
"loc": "/examples/running_ssh_service#dockerizing-an-ssh-daemon-service",
"tags": "",
"text": "",
"title": "Dockerizing an SSH Daemon Service"
},
{
"loc": "/examples/running_ssh_service#build-an-eg_sshd-image",
"tags": "",
"text": "The following Dockerfile sets up an SSHd service in a container that you\ncan use to connect to and inspect other container's volumes, or to get\nquick access to a test container. # sshd\n#\n# VERSION 0.0.2\n\nFROM ubuntu:14.04\nMAINTAINER Sven Dowideit SvenDowideit@docker.com \n\nRUN apt-get update apt-get install -y openssh-server\nRUN mkdir /var/run/sshd\nRUN echo 'root:screencast' | chpasswd\nRUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config\n\n# SSH login fix. Otherwise user is kicked off after login\nRUN sed 's@session\\s*required\\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd\n\nENV NOTVISIBLE \"in users profile\"\nRUN echo \"export VISIBLE=now\" /etc/profile\n\nEXPOSE 22\nCMD [\"/usr/sbin/sshd\", \"-D\"] Build the image using: $ sudo docker build -t eg_sshd .",
"title": "Build an eg_sshd image"
},
{
"loc": "/examples/running_ssh_service#run-a-test_sshd-container",
"tags": "",
"text": "Then run it. You can then use docker port to find out what host port\nthe container's port 22 is mapped to: $ sudo docker run -d -P --name test_sshd eg_sshd\n$ sudo docker port test_sshd 22\n0.0.0.0:49154 And now you can ssh as root on the container's IP address (you can find it\nwith docker inspect ) or on port 49154 of the Docker daemon's host IP address\n( ip address or ifconfig can tell you that) or localhost if on the\nDocker daemon host: $ ssh root@192.168.1.2 -p 49154\n# The password is ``screencast``.\n$$",
"title": "Run a test_sshd container"
},
{
"loc": "/examples/running_ssh_service#environment-variables",
"tags": "",
"text": "Using the sshd daemon to spawn shells makes it complicated to pass environment\nvariables to the user's shell via the normal Docker mechanisms, as sshd scrubs\nthe environment before it starts the shell. If you're setting values in the Dockerfile using ENV , you'll need to push them\nto a shell initialization file like the /etc/profile example in the Dockerfile \nabove. If you need to pass docker run -e ENV=value values, you will need to write a\nshort script to do the same before you start sshd -D and then replace the CMD with that script.",
"title": "Environment variables"
},
{
"loc": "/examples/running_ssh_service#clean-up",
"tags": "",
"text": "Finally, clean up after your test by stopping and removing the\ncontainer, and then removing the image. $ sudo docker stop test_sshd\n$ sudo docker rm test_sshd\n$ sudo docker rmi eg_sshd",
"title": "Clean up"
},
{
"loc": "/examples/couchdb_data_volumes/",
"tags": "",
"text": "Dockerizing a CouchDB Service\n\nNote: \n- If you don't like sudo then see Giving non-root\n access\n\nHere's an example of using data volumes to share the same data between\ntwo CouchDB containers. This could be used for hot upgrades, testing\ndifferent versions of CouchDB on the same data, etc.\nCreate first database\nNote that we're marking /var/lib/couchdb as a data volume.\n$ COUCH1=$(sudo docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03)\n\nAdd data to the first database\nWe're assuming your Docker host is reachable at localhost. If not,\nreplace localhost with the public IP of your Docker host.\n$ HOST=localhost\n$ URL=\"http://$HOST:$(sudo docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/\"\n$ echo \"Navigate to $URL in your browser, and use the couch interface to add data\"\n\nCreate second database\nThis time, we're requesting shared access to $COUCH1's volumes.\n$ COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03)\n\nBrowse data on the second database\n$ HOST=localhost\n$ URL=\"http://$HOST:$(sudo docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/\"\n$ echo \"Navigate to $URL in your browser. You should see the same data as in the first database\"'!'\n\nCongratulations, you are now running two Couchdb containers, completely\nisolated from each other except for their data.",
"title": "Dockerizing a CouchDB service"
},
{
"loc": "/examples/couchdb_data_volumes#dockerizing-a-couchdb-service",
"tags": "",
"text": "Note : \n- If you don't like sudo then see Giving non-root\n access Here's an example of using data volumes to share the same data between\ntwo CouchDB containers. This could be used for hot upgrades, testing\ndifferent versions of CouchDB on the same data, etc.",
"title": "Dockerizing a CouchDB Service"
},
{
"loc": "/examples/couchdb_data_volumes#create-first-database",
"tags": "",
"text": "Note that we're marking /var/lib/couchdb as a data volume. $ COUCH1=$(sudo docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03)",
"title": "Create first database"
},
{
"loc": "/examples/couchdb_data_volumes#add-data-to-the-first-database",
"tags": "",
"text": "We're assuming your Docker host is reachable at localhost . If not,\nreplace localhost with the public IP of your Docker host. $ HOST=localhost\n$ URL=\"http://$HOST:$(sudo docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/\"\n$ echo \"Navigate to $URL in your browser, and use the couch interface to add data\"",
"title": "Add data to the first database"
},
{
"loc": "/examples/couchdb_data_volumes#create-second-database",
"tags": "",
"text": "This time, we're requesting shared access to $COUCH1 's volumes. $ COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03)",
"title": "Create second database"
},
{
"loc": "/examples/couchdb_data_volumes#browse-data-on-the-second-database",
"tags": "",
"text": "$ HOST=localhost\n$ URL=\"http://$HOST:$(sudo docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/\"\n$ echo \"Navigate to $URL in your browser. You should see the same data as in the first database\"'!' Congratulations, you are now running two Couchdb containers, completely\nisolated from each other except for their data.",
"title": "Browse data on the second database"
},
{
"loc": "/examples/apt-cacher-ng/",
"tags": "",
"text": "Dockerizing an Apt-Cacher-ng Service\n\nNote: \n- If you don't like sudo then see Giving non-root\n access.\n- If you're using OS X or docker via TCP then you shouldn't use\n sudo.\n\nWhen you have multiple Docker servers, or build unrelated Docker\ncontainers which can't make use of the Docker build cache, it can be\nuseful to have a caching proxy for your packages. This container makes\nthe second download of any package almost instant.\nUse the following Dockerfile:\n#\n# Build: docker build -t apt-cacher .\n# Run: docker run -d -p 3142:3142 --name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# docker run -t -i --rm -e http_proxy http://dockerhost:3142/ debian bash\n#\nFROM ubuntu\nMAINTAINER SvenDowideit@docker.com\n\nVOLUME [\"/var/cache/apt-cacher-ng\"]\nRUN apt-get update apt-get install -y apt-cacher-ng\n\nEXPOSE 3142\nCMD chmod 777 /var/cache/apt-cacher-ng /etc/init.d/apt-cacher-ng start tail -f /var/log/apt-cacher-ng/*\n\nTo build the image using:\n$ sudo docker build -t eg_apt_cacher_ng .\n\nThen run it, mapping the exposed port to one on the host\n$ sudo docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng\n\nTo see the logfiles that are tailed in the default command, you can\nuse:\n$ sudo docker logs -f test_apt_cacher_ng\n\nTo get your Debian-based containers to use the proxy, you can do one of\nthree things\n\nAdd an apt Proxy setting\n echo 'Acquire::http { Proxy \"http://dockerhost:3142\"; };' /etc/apt/conf.d/01proxy\nSet an environment variable:\n http_proxy=http://dockerhost:3142/\nChange your sources.list entries to start with\n http://dockerhost:3142/\n\nOption 1 injects the settings safely into your apt configuration in\na local version of a common base:\nFROM ubuntu\nRUN echo 'Acquire::http { Proxy \"http://dockerhost:3142\"; };' /etc/apt/apt.conf.d/01proxy\nRUN apt-get update apt-get install -y vim git\n\n# docker build -t my_ubuntu .\n\nOption 2 is good for testing, but will break other 
HTTP clients\nwhich obey http_proxy, such as curl, wget and others:\n$ sudo docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash\n\nOption 3 is the least portable, but there will be times when you\nmight need to do it and you can do it from your Dockerfile\ntoo.\nApt-cacher-ng has some tools that allow you to manage the repository,\nand they can be used by leveraging the VOLUME\ninstruction, and the image we built to run the service:\n$ sudo docker run --rm -t -i --volumes-from test_apt_cacher_ng eg_apt_cacher_ng bash\n\n$$ /usr/lib/apt-cacher-ng/distkill.pl\nScanning /var/cache/apt-cacher-ng, please wait...\nFound distributions:\nbla, taggedcount: 0\n 1. precise-security (36 index files)\n 2. wheezy (25 index files)\n 3. precise-updates (36 index files)\n 4. precise (36 index files)\n 5. wheezy-updates (18 index files)\n\nFound architectures:\n 6. amd64 (36 index files)\n 7. i386 (24 index files)\n\nWARNING: The removal action may wipe out whole directories containing\n index files. Select d to see detailed list.\n\n(Number nn: tag distribution or architecture nn; 0: exit; d: show details; r: remove tagged; q: quit): q\n\nFinally, clean up after your test by stopping and removing the\ncontainer, and then removing the image.\n$ sudo docker stop test_apt_cacher_ng\n$ sudo docker rm test_apt_cacher_ng\n$ sudo docker rmi eg_apt_cacher_ng",
"title": "Dockerizing an Apt-Cacher-ng service"
},
{
"loc": "/examples/apt-cacher-ng#dockerizing-an-apt-cacher-ng-service",
"tags": "",
"text": "Note : \n- If you don't like sudo then see Giving non-root\n access .\n- If you're using OS X or docker via TCP then you shouldn't use\n sudo. When you have multiple Docker servers, or build unrelated Docker\ncontainers which can't make use of the Docker build cache, it can be\nuseful to have a caching proxy for your packages. This container makes\nthe second download of any package almost instant. Use the following Dockerfile: #\n# Build: docker build -t apt-cacher .\n# Run: docker run -d -p 3142:3142 --name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# docker run -t -i --rm -e http_proxy http://dockerhost:3142/ debian bash\n#\nFROM ubuntu\nMAINTAINER SvenDowideit@docker.com\n\nVOLUME [\"/var/cache/apt-cacher-ng\"]\nRUN apt-get update apt-get install -y apt-cacher-ng\n\nEXPOSE 3142\nCMD chmod 777 /var/cache/apt-cacher-ng /etc/init.d/apt-cacher-ng start tail -f /var/log/apt-cacher-ng/* To build the image using: $ sudo docker build -t eg_apt_cacher_ng . Then run it, mapping the exposed port to one on the host $ sudo docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng To see the logfiles that are tailed in the default command, you can\nuse: $ sudo docker logs -f test_apt_cacher_ng To get your Debian-based containers to use the proxy, you can do one of\nthree things Add an apt Proxy setting\n echo 'Acquire::http { Proxy \"http://dockerhost:3142\"; };' /etc/apt/conf.d/01proxy Set an environment variable:\n http_proxy=http://dockerhost:3142/ Change your sources.list entries to start with\n http://dockerhost:3142/ Option 1 injects the settings safely into your apt configuration in\na local version of a common base: FROM ubuntu\nRUN echo 'Acquire::http { Proxy \"http://dockerhost:3142\"; };' /etc/apt/apt.conf.d/01proxy\nRUN apt-get update apt-get install -y vim git\n\n# docker build -t my_ubuntu . 
Option 2 is good for testing, but will break other HTTP clients\nwhich obey http_proxy , such as curl , wget and others: $ sudo docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash Option 3 is the least portable, but there will be times when you\nmight need to do it and you can do it from your Dockerfile \ntoo. Apt-cacher-ng has some tools that allow you to manage the repository,\nand they can be used by leveraging the VOLUME \ninstruction, and the image we built to run the service: $ sudo docker run --rm -t -i --volumes-from test_apt_cacher_ng eg_apt_cacher_ng bash\n\n$$ /usr/lib/apt-cacher-ng/distkill.pl\nScanning /var/cache/apt-cacher-ng, please wait...\nFound distributions:\nbla, taggedcount: 0\n 1. precise-security (36 index files)\n 2. wheezy (25 index files)\n 3. precise-updates (36 index files)\n 4. precise (36 index files)\n 5. wheezy-updates (18 index files)\n\nFound architectures:\n 6. amd64 (36 index files)\n 7. i386 (24 index files)\n\nWARNING: The removal action may wipe out whole directories containing\n index files. Select d to see detailed list.\n\n(Number nn: tag distribution or architecture nn; 0: exit; d: show details; r: remove tagged; q: quit): q Finally, clean up after your test by stopping and removing the\ncontainer, and then removing the image. $ sudo docker stop test_apt_cacher_ng\n$ sudo docker rm test_apt_cacher_ng\n$ sudo docker rmi eg_apt_cacher_ng",
"title": "Dockerizing an Apt-Cacher-ng Service"
},
{
"loc": "/compose/django/",
"tags": "",
"text": "Getting started with Compose and Django\nLet's use Compose to set up and run a Django/PostgreSQL app. Before starting, you'll need to have Compose installed.\nLet's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called Dockerfile. It'll contain this to start with:\nFROM python:2.7\nENV PYTHONUNBUFFERED 1\nRUN mkdir /code\nWORKDIR /code\nADD requirements.txt /code/\nRUN pip install -r requirements.txt\nADD . /code/\n\nThat'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference.\nSecond, we define our Python dependencies in a file called requirements.txt:\nDjango\npsycopg2\n\nSimple enough. Finally, this is all tied together with a file called docker-compose.yml. It describes the services that our app comprises of (a web server and database), what Docker images they use, how they link together, what volumes will be mounted inside the containers and what ports they expose.\ndb:\n image: postgres\nweb:\n build: .\n command: python manage.py runserver 0.0.0.0:8000\n volumes:\n - .:/code\n ports:\n - \"8000:8000\"\n links:\n - db\n\nSee the docker-compose.yml reference for more information on how it works.\nWe can now start a Django project using docker-compose run:\n$ docker-compose run web django-admin.py startproject composeexample .\n\nFirst, Compose will build an image for the web service using the Dockerfile. It will then run django-admin.py startproject composeexample . inside a container using that image.\nThis will generate a Django app inside the current directory:\n$ ls\nDockerfile docker-compose.yml composeexample manage.py requirements.txt\n\nFirst thing we need to do is set up the database connection. 
Replace the DATABASES = ... definition in composeexample/settings.py to read:\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'postgres',\n 'USER': 'postgres',\n 'HOST': 'db',\n 'PORT': 5432,\n }\n}\n\nThese settings are determined by the postgres Docker image we are using.\nThen, run docker-compose up:\nRecreating myapp_db_1...\nRecreating myapp_web_1...\nAttaching to myapp_db_1, myapp_web_1\nmyapp_db_1 |\nmyapp_db_1 | PostgreSQL stand-alone backend 9.1.11\nmyapp_db_1 | 2014-01-27 12:17:03 UTC LOG: database system is ready to accept connections\nmyapp_db_1 | 2014-01-27 12:17:03 UTC LOG: autovacuum launcher started\nmyapp_web_1 | Validating models...\nmyapp_web_1 |\nmyapp_web_1 | 0 errors found\nmyapp_web_1 | January 27, 2014 - 12:12:40\nmyapp_web_1 | Django version 1.6.1, using settings 'composeexample.settings'\nmyapp_web_1 | Starting development server at http://0.0.0.0:8000/\nmyapp_web_1 | Quit the server with CONTROL-C.\n\nAnd your Django app should be running at port 8000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address).\nYou can also run management commands with Docker. To set up your database, for example, run docker-compose up and in another terminal run:\n$ docker-compose run web python manage.py syncdb\n\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nYaml file reference\nCompose environment variables\nCompose command line completion",
"title": "Getting started with Compose and Django"
},
{
"loc": "/compose/django#getting-started-with-compose-and-django",
"tags": "",
"text": "Let's use Compose to set up and run a Django/PostgreSQL app. Before starting, you'll need to have Compose installed . Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called Dockerfile . It'll contain this to start with: FROM python:2.7\nENV PYTHONUNBUFFERED 1\nRUN mkdir /code\nWORKDIR /code\nADD requirements.txt /code/\nRUN pip install -r requirements.txt\nADD . /code/ That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference . Second, we define our Python dependencies in a file called requirements.txt : Django\npsycopg2 Simple enough. Finally, this is all tied together with a file called docker-compose.yml . It describes the services that our app comprises of (a web server and database), what Docker images they use, how they link together, what volumes will be mounted inside the containers and what ports they expose. db:\n image: postgres\nweb:\n build: .\n command: python manage.py runserver 0.0.0.0:8000\n volumes:\n - .:/code\n ports:\n - \"8000:8000\"\n links:\n - db See the docker-compose.yml reference for more information on how it works. We can now start a Django project using docker-compose run : $ docker-compose run web django-admin.py startproject composeexample . First, Compose will build an image for the web service using the Dockerfile . It will then run django-admin.py startproject composeexample . inside a container using that image. This will generate a Django app inside the current directory: $ ls\nDockerfile docker-compose.yml composeexample manage.py requirements.txt First thing we need to do is set up the database connection. Replace the DATABASES = ... 
definition in composeexample/settings.py to read: DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'postgres',\n 'USER': 'postgres',\n 'HOST': 'db',\n 'PORT': 5432,\n }\n} These settings are determined by the postgres Docker image we are using. Then, run docker-compose up : Recreating myapp_db_1...\nRecreating myapp_web_1...\nAttaching to myapp_db_1, myapp_web_1\nmyapp_db_1 |\nmyapp_db_1 | PostgreSQL stand-alone backend 9.1.11\nmyapp_db_1 | 2014-01-27 12:17:03 UTC LOG: database system is ready to accept connections\nmyapp_db_1 | 2014-01-27 12:17:03 UTC LOG: autovacuum launcher started\nmyapp_web_1 | Validating models...\nmyapp_web_1 |\nmyapp_web_1 | 0 errors found\nmyapp_web_1 | January 27, 2014 - 12:12:40\nmyapp_web_1 | Django version 1.6.1, using settings 'composeexample.settings'\nmyapp_web_1 | Starting development server at http://0.0.0.0:8000/\nmyapp_web_1 | Quit the server with CONTROL-C. And your Django app should be running at port 8000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address). You can also run management commands with Docker. To set up your database, for example, run docker-compose up and in another terminal run: $ docker-compose run web python manage.py syncdb",
"title": "Getting started with Compose and Django"
},
{
"loc": "/compose/django#compose-documentation",
"tags": "",
"text": "Installing Compose User guide Command line reference Yaml file reference Compose environment variables Compose command line completion",
"title": "Compose documentation"
},
{
"loc": "/compose/rails/",
"tags": "",
"text": "Getting started with Compose and Rails\nWe're going to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have Compose installed.\nLet's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called Dockerfile. It'll contain this to start with:\nFROM ruby:2.2.0\nRUN apt-get update -qq apt-get install -y build-essential libpq-dev\nRUN mkdir /myapp\nWORKDIR /myapp\nADD Gemfile /myapp/Gemfile\nRUN bundle install\nADD . /myapp\n\nThat'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference.\nNext, we have a bootstrap Gemfile which just loads Rails. It'll be overwritten in a moment by rails new.\nsource 'https://rubygems.org'\ngem 'rails', '4.2.0'\n\nFinally, docker-compose.yml is where the magic happens. It describes what services our app comprises (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration we need to link them together and expose the web app's port.\ndb:\n image: postgres\n ports:\n - \"5432\"\nweb:\n build: .\n command: bundle exec rails s -p 3000 -b '0.0.0.0'\n volumes:\n - .:/myapp\n ports:\n - \"3000:3000\"\n links:\n - db\n\nWith those files in place, we can now generate the Rails skeleton app using docker-compose run:\n$ docker-compose run web rails new . --force --database=postgresql --skip-bundle\n\nFirst, Compose will build the image for the web service using the Dockerfile. Then it'll run rails new inside a new container, using that image. 
Once it's done, you should have a fresh app generated:\n$ ls\nDockerfile app docker-compose.yml tmp\nGemfile bin lib vendor\nGemfile.lock config log\nREADME.rdoc config.ru public\nRakefile db test\n\nUncomment the line in your new Gemfile which loads therubyracer, so we've got a Javascript runtime:\ngem 'therubyracer', platforms: :ruby\n\nNow that we've got a new Gemfile, we need to build the image again. (This, and changes to the Dockerfile itself, should be the only times you'll need to rebuild).\n$ docker-compose build\n\nThe app is now bootable, but we're not quite there yet. By default, Rails expects a database to be running on localhost - we need to point it at the db container instead. We also need to change the database and username to align with the defaults set by the postgres image.\nOpen up your newly-generated database.yml. Replace its contents with the following:\ndevelopment: &default\n adapter: postgresql\n encoding: unicode\n database: postgres\n pool: 5\n username: postgres\n password:\n host: db\n\ntest:\n <<: *default\n database: myapp_test\n\nWe can now boot the app.\n$ docker-compose up\n\nIf all's well, you should see some PostgreSQL output, and then\u2014after a few seconds\u2014the familiar refrain:\nmyapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick 1.3.1\nmyapp_web_1 | [2014-01-17 17:16:29] INFO ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]\nmyapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick::HTTPServer#start: pid=1 port=3000\n\nFinally, we just need to create the database. In another terminal, run:\n$ docker-compose run web rake db:create\n\nAnd we're rolling\u2014your app should now be running on port 3000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address).\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nYaml file reference\nCompose environment variables\nCompose command line completion",
|
|
"title": "Getting started with Compose and Rails"
|
|
},
|
|
{
|
|
"loc": "/compose/rails#getting-started-with-compose-and-rails",
|
|
"tags": "",
|
|
"text": "We're going to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have Compose installed . Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called Dockerfile . It'll contain this to start with: FROM ruby:2.2.0\nRUN apt-get update -qq && apt-get install -y build-essential libpq-dev\nRUN mkdir /myapp\nWORKDIR /myapp\nADD Gemfile /myapp/Gemfile\nRUN bundle install\nADD . /myapp That'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference . Next, we have a bootstrap Gemfile which just loads Rails. It'll be overwritten in a moment by rails new . source 'https://rubygems.org'\ngem 'rails', '4.2.0' Finally, docker-compose.yml is where the magic happens. It describes what services our app comprises (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration we need to link them together and expose the web app's port. db:\n image: postgres\n ports:\n - \"5432\"\nweb:\n build: .\n command: bundle exec rails s -p 3000 -b '0.0.0.0'\n volumes:\n - .:/myapp\n ports:\n - \"3000:3000\"\n links:\n - db With those files in place, we can now generate the Rails skeleton app using docker-compose run : $ docker-compose run web rails new . --force --database=postgresql --skip-bundle First, Compose will build the image for the web service using the Dockerfile . Then it'll run rails new inside a new container, using that image. 
Once it's done, you should have a fresh app generated: $ ls\nDockerfile app docker-compose.yml tmp\nGemfile bin lib vendor\nGemfile.lock config log\nREADME.rdoc config.ru public\nRakefile db test Uncomment the line in your new Gemfile which loads therubyracer , so we've got a Javascript runtime: gem 'therubyracer', platforms: :ruby Now that we've got a new Gemfile , we need to build the image again. (This, and changes to the Dockerfile itself, should be the only times you'll need to rebuild). $ docker-compose build The app is now bootable, but we're not quite there yet. By default, Rails expects a database to be running on localhost - we need to point it at the db container instead. We also need to change the database and username to align with the defaults set by the postgres image. Open up your newly-generated database.yml . Replace its contents with the following: development: &default\n adapter: postgresql\n encoding: unicode\n database: postgres\n pool: 5\n username: postgres\n password:\n host: db\n\ntest:\n <<: *default\n database: myapp_test We can now boot the app. $ docker-compose up If all's well, you should see some PostgreSQL output, and then\u2014after a few seconds\u2014the familiar refrain: myapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick 1.3.1\nmyapp_web_1 | [2014-01-17 17:16:29] INFO ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]\nmyapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick::HTTPServer#start: pid=1 port=3000 Finally, we just need to create the database. In another terminal, run: $ docker-compose run web rake db:create And we're rolling\u2014your app should now be running on port 3000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address).",
|
|
"title": "Getting started with Compose and Rails"
|
|
},
|
|
{
|
|
"loc": "/compose/rails#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Command line reference Yaml file reference Compose environment variables Compose command line completion",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/compose/wordpress/",
|
|
"tags": "",
|
|
"text": "Getting started with Compose and Wordpress\nCompose makes it nice and easy to run Wordpress in an isolated environment. Install Compose, then download Wordpress into the current directory:\n$ curl https://wordpress.org/latest.tar.gz | tar -xvzf -\n\nThis will create a directory called wordpress, which you can rename to the name of your project if you wish. Inside that directory, we create Dockerfile, a file that defines what environment your app is going to run in:\nFROM orchardup/php5\nADD . /code\n\n\nThis instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference.\nNext up, docker-compose.yml starts our web service and a separate MySQL instance:\nweb:\n build: .\n command: php -S 0.0.0.0:8000 -t /code\n ports:\n - \"8000:8000\"\n links:\n - db\n volumes:\n - .:/code\ndb:\n image: orchardup/mysql\n environment:\n MYSQL_DATABASE: wordpress\n\n\nTwo supporting files are needed to get this working - first up, wp-config.php is the standard Wordpress config file with a single change to point the database configuration at the db container:\n<?php\ndefine('DB_NAME', 'wordpress');\ndefine('DB_USER', 'root');\ndefine('DB_PASSWORD', '');\ndefine('DB_HOST', \"db:3306\");\ndefine('DB_CHARSET', 'utf8');\ndefine('DB_COLLATE', '');\n\ndefine('AUTH_KEY', 'put your unique phrase here');\ndefine('SECURE_AUTH_KEY', 'put your unique phrase here');\ndefine('LOGGED_IN_KEY', 'put your unique phrase here');\ndefine('NONCE_KEY', 'put your unique phrase here');\ndefine('AUTH_SALT', 'put your unique phrase here');\ndefine('SECURE_AUTH_SALT', 'put your unique phrase here');\ndefine('LOGGED_IN_SALT', 'put your unique phrase here');\ndefine('NONCE_SALT', 'put your unique phrase here');\n\n$table_prefix = 'wp_';\ndefine('WPLANG', '');\ndefine('WP_DEBUG', false);\n\nif ( !defined('ABSPATH') )\n define('ABSPATH', dirname(__FILE__) . '/');\n\nrequire_once(ABSPATH . 
'wp-settings.php');\n\n\nFinally, router.php tells PHP's built-in web server how to run Wordpress:\n<?php\n\n$root = $_SERVER['DOCUMENT_ROOT'];\nchdir($root);\n$path = '/'.ltrim(parse_url($_SERVER['REQUEST_URI'])['path'],'/');\nset_include_path(get_include_path().':'.__DIR__);\nif(file_exists($root.$path))\n{\n if(is_dir($root.$path) && substr($path,strlen($path) - 1, 1) !== '/')\n $path = rtrim($path,'/').'/index.php';\n if(strpos($path,'.php') === false) return false;\n else {\n chdir(dirname($root.$path));\n require_once $root.$path;\n }\n}else include_once 'index.php';\n\n\nWith those four files in place, run docker-compose up inside your Wordpress directory and it'll pull and build the images we need, and then start the web and database containers. You'll then be able to visit Wordpress at port 8000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address).\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nYaml file reference\nCompose environment variables\nCompose command line completion",
|
|
"title": "Getting started with Compose and Wordpress"
|
|
},
|
|
{
|
|
"loc": "/compose/wordpress#getting-started-with-compose-and-wordpress",
|
|
"tags": "",
|
|
"text": "Compose makes it nice and easy to run Wordpress in an isolated environment. Install Compose , then download Wordpress into the current directory: $ curl https://wordpress.org/latest.tar.gz | tar -xvzf - This will create a directory called wordpress , which you can rename to the name of your project if you wish. Inside that directory, we create Dockerfile , a file that defines what environment your app is going to run in: FROM orchardup/php5\nADD . /code This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference . Next up, docker-compose.yml starts our web service and a separate MySQL instance: web:\n build: .\n command: php -S 0.0.0.0:8000 -t /code\n ports:\n - \"8000:8000\"\n links:\n - db\n volumes:\n - .:/code\ndb:\n image: orchardup/mysql\n environment:\n MYSQL_DATABASE: wordpress Two supporting files are needed to get this working - first up, wp-config.php is the standard Wordpress config file with a single change to point the database configuration at the db container: <?php\ndefine('DB_NAME', 'wordpress');\ndefine('DB_USER', 'root');\ndefine('DB_PASSWORD', '');\ndefine('DB_HOST', \"db:3306\");\ndefine('DB_CHARSET', 'utf8');\ndefine('DB_COLLATE', '');\n\ndefine('AUTH_KEY', 'put your unique phrase here');\ndefine('SECURE_AUTH_KEY', 'put your unique phrase here');\ndefine('LOGGED_IN_KEY', 'put your unique phrase here');\ndefine('NONCE_KEY', 'put your unique phrase here');\ndefine('AUTH_SALT', 'put your unique phrase here');\ndefine('SECURE_AUTH_SALT', 'put your unique phrase here');\ndefine('LOGGED_IN_SALT', 'put your unique phrase here');\ndefine('NONCE_SALT', 'put your unique phrase here');\n\n$table_prefix = 'wp_';\ndefine('WPLANG', '');\ndefine('WP_DEBUG', false);\n\nif ( !defined('ABSPATH') )\n define('ABSPATH', dirname(__FILE__) . '/');\n\nrequire_once(ABSPATH . 
'wp-settings.php'); Finally, router.php tells PHP's built-in web server how to run Wordpress: <?php\n\n$root = $_SERVER['DOCUMENT_ROOT'];\nchdir($root);\n$path = '/'.ltrim(parse_url($_SERVER['REQUEST_URI'])['path'],'/');\nset_include_path(get_include_path().':'.__DIR__);\nif(file_exists($root.$path))\n{\n if(is_dir($root.$path) && substr($path,strlen($path) - 1, 1) !== '/')\n $path = rtrim($path,'/').'/index.php';\n if(strpos($path,'.php') === false) return false;\n else {\n chdir(dirname($root.$path));\n require_once $root.$path;\n }\n}else include_once 'index.php'; With those four files in place, run docker-compose up inside your Wordpress directory and it'll pull and build the images we need, and then start the web and database containers. You'll then be able to visit Wordpress at port 8000 on your docker daemon (if you're using boot2docker, boot2docker ip will tell you its address).",
|
|
"title": "Getting started with Compose and Wordpress"
|
|
},
|
|
{
|
|
"loc": "/compose/wordpress#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Command line reference Yaml file reference Compose environment variables Compose command line completion",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/articles/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/articles#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/articles#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
|
|
{
|
|
"loc": "/articles#installation",
|
|
"tags": "",
|
|
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/articles#user-guide",
|
|
"tags": "",
|
|
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
|
|
"title": "User Guide"
|
|
},
|
|
{
|
|
"loc": "/articles#docker-hub",
|
|
"tags": "",
|
|
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
|
|
"title": "Docker Hub"
|
|
},
|
|
{
|
|
"loc": "/articles#examples",
|
|
"tags": "",
|
|
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/articles#articles",
|
|
"tags": "",
|
|
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
|
|
"title": "Articles"
|
|
},
|
|
{
|
|
"loc": "/articles#reference",
|
|
"tags": "",
|
|
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
|
|
"title": "Reference"
|
|
},
|
|
{
|
|
"loc": "/articles#contributor-guide",
|
|
"tags": "",
|
|
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
|
|
"title": "Contributor Guide"
|
|
},
|
|
{
|
|
"loc": "/articles/basics/",
|
|
"tags": "",
|
|
"text": "First steps with Docker\nCheck your Docker install\nThis guide assumes you have a working installation of Docker. To check\nyour Docker install, run the following command:\n# Check that you have a working install\n$ sudo docker info\n\nIf you get docker: command not found or something like\n/var/lib/docker/repositories: permission denied you may have an\nincomplete Docker installation or insufficient privileges to access\nDocker on your machine.\nPlease refer to Installation\nfor installation instructions.\nDownload a pre-built image\n# Download an ubuntu image\n$ sudo docker pull ubuntu\n\nThis will find the ubuntu image by name on\nDocker Hub\nand download it from Docker Hub to a local\nimage cache.\n\nNote:\nWhen the image has successfully downloaded, you will see a 12 character\nhash 539c0211cd76: Download complete which is the\nshort form of the image ID. These short image IDs are the first 12\ncharacters of the full image ID - which can be found using\ndocker inspect or docker images --no-trunc=true\nNote: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.\n\nRunning an interactive shell\n# Run an interactive shell in the ubuntu image,\n# allocate a tty, attach stdin and stdout\n# To detach the tty without exiting the shell,\n# use the escape sequence Ctrl-p + Ctrl-q\n# note: This will continue to exist in a stopped state once exited (see \"docker ps -a\")\n$ sudo docker run -i -t ubuntu /bin/bash\n\nBind Docker to another host/port or a Unix socket\n\nWarning:\nChanging the default docker daemon binding to a\nTCP port or Unix docker user group will increase your security risks\nby allowing non-root users to gain root access on the host. Make sure\nyou control access to docker. 
If you are binding\nto a TCP port, anyone with access to that port has full Docker access;\nso it is not advisable on an open network.\n\nWith -H it is possible to make the Docker daemon listen on a\nspecific IP and port. By default, it will listen on\nunix:///var/run/docker.sock to allow only local connections by the\nroot user. You could set it to 0.0.0.0:2375 or a specific host IP\nto give access to everybody, but that is not recommended because\nthen it is trivial for someone to gain root access to the host where the\ndaemon is running.\nSimilarly, the Docker client can use -H to connect to a custom port.\n-H accepts host and port assignment in the following format:\ntcp://[host][:port] or unix://path\n\nFor example:\n\ntcp://host:2375 - TCP connection on\n host:2375\nunix://path/to/socket - Unix socket located\n at path/to/socket\n\n-H, when empty, will default to the same value as\nwhen no -H was passed in.\n-H also accepts short form for TCP bindings:\nhost[:port] or :port\n\nRun Docker in daemon mode:\n$ sudo <path to>/docker -H 0.0.0.0:5555 -d \n\nDownload an ubuntu image:\n$ sudo docker -H :5555 pull ubuntu\n\nYou can use multiple -H, for example, if you want to listen on both\nTCP and a Unix socket\n# Run docker in daemon mode\n$ sudo <path to>/docker -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -d \n# Download an ubuntu image, use default Unix socket\n$ sudo docker pull ubuntu\n# OR use the TCP port\n$ sudo docker -H tcp://127.0.0.1:2375 pull ubuntu\n\nStarting a long-running worker process\n# Start a very useful long-running process\n$ JOB=$(sudo docker run -d ubuntu /bin/sh -c \"while true; do echo Hello world; sleep 1; done\")\n\n# Collect the output of the job so far\n$ sudo docker logs $JOB\n\n# Kill the job\n$ sudo docker kill $JOB\n\nListing containers\n$ sudo docker ps # Lists only running containers\n$ sudo docker ps -a # Lists all containers\n\nControlling containers\n# Start a new container\n$ JOB=$(sudo docker run -d ubuntu 
/bin/sh -c \"while true; do echo Hello world; sleep 1; done\")\n\n# Stop the container\n$ sudo docker stop $JOB\n\n# Start the container\n$ sudo docker start $JOB\n\n# Restart the container\n$ sudo docker restart $JOB\n\n# SIGKILL a container\n$ sudo docker kill $JOB\n\n# Remove a container\n$ sudo docker stop $JOB # Container must be stopped to remove it\n$ sudo docker rm $JOB\n\nBind a service on a TCP port\n# Bind port 4444 of this container, and tell netcat to listen on it\n$ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)\n\n# Which public port is NATed to my container?\n$ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }')\n\n# Connect to the public port\n$ echo hello world | nc 127.0.0.1 $PORT\n\n# Verify that the network connection worked\n$ echo \"Daemon received: $(sudo docker logs $JOB)\"\n\nCommitting (saving) a container state\nSave your containers state to an image, so the state can be\nre-used.\nWhen you commit your container only the differences between the image\nthe container was created from and the current state of the container\nwill be stored (as a diff). See which images you already have using the\ndocker images command.\n# Commit your container to a new named image\n$ sudo docker commit container_id some_name\n\n# List your containers\n$ sudo docker images\n\nYou now have an image state from which you can create new instances.\nRead more about Share Images via\nRepositories or\ncontinue to the complete Command\nLine",
|
|
"title": "Docker basics"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#first-steps-with-docker",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "First steps with Docker"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#check-your-docker-install",
|
|
"tags": "",
|
|
"text": "This guide assumes you have a working installation of Docker. To check\nyour Docker install, run the following command: # Check that you have a working install\n$ sudo docker info If you get docker: command not found or something like /var/lib/docker/repositories: permission denied you may have an\nincomplete Docker installation or insufficient privileges to access\nDocker on your machine. Please refer to Installation \nfor installation instructions.",
|
|
"title": "Check your Docker install"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#download-a-pre-built-image",
|
|
"tags": "",
|
|
"text": "# Download an ubuntu image\n$ sudo docker pull ubuntu This will find the ubuntu image by name on Docker Hub \nand download it from Docker Hub to a local\nimage cache. Note :\nWhen the image has successfully downloaded, you will see a 12 character\nhash 539c0211cd76: Download complete which is the\nshort form of the image ID. These short image IDs are the first 12\ncharacters of the full image ID - which can be found using docker inspect or docker images --no-trunc=true Note: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.",
|
|
"title": "Download a pre-built image"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#running-an-interactive-shell",
|
|
"tags": "",
|
|
"text": "# Run an interactive shell in the ubuntu image,\n# allocate a tty, attach stdin and stdout\n# To detach the tty without exiting the shell,\n# use the escape sequence Ctrl-p + Ctrl-q\n# note: This will continue to exist in a stopped state once exited (see \"docker ps -a\")\n$ sudo docker run -i -t ubuntu /bin/bash",
|
|
"title": "Running an interactive shell"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#bind-docker-to-another-hostport-or-a-unix-socket",
|
|
"tags": "",
|
|
"text": "Warning :\nChanging the default docker daemon binding to a\nTCP port or Unix docker user group will increase your security risks\nby allowing non-root users to gain root access on the host. Make sure\nyou control access to docker . If you are binding\nto a TCP port, anyone with access to that port has full Docker access;\nso it is not advisable on an open network. With -H it is possible to make the Docker daemon listen on a\nspecific IP and port. By default, it will listen on unix:///var/run/docker.sock to allow only local connections by the root user. You could set it to 0.0.0.0:2375 or a specific host IP\nto give access to everybody, but that is not recommended because\nthen it is trivial for someone to gain root access to the host where the\ndaemon is running. Similarly, the Docker client can use -H to connect to a custom port. -H accepts host and port assignment in the following format: tcp://[host][:port] or unix://path For example: tcp://host:2375 - TCP connection on\n host:2375 unix://path/to/socket - Unix socket located\n at path/to/socket -H , when empty, will default to the same value as\nwhen no -H was passed in. -H also accepts short form for TCP bindings: host[:port] or :port Run Docker in daemon mode: $ sudo <path to>/docker -H 0.0.0.0:5555 -d Download an ubuntu image: $ sudo docker -H :5555 pull ubuntu You can use multiple -H , for example, if you want to listen on both\nTCP and a Unix socket # Run docker in daemon mode\n$ sudo <path to>/docker -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -d \n# Download an ubuntu image, use default Unix socket\n$ sudo docker pull ubuntu\n# OR use the TCP port\n$ sudo docker -H tcp://127.0.0.1:2375 pull ubuntu",
|
|
"title": "Bind Docker to another host/port or a Unix socket"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#starting-a-long-running-worker-process",
|
|
"tags": "",
|
|
"text": "# Start a very useful long-running process\n$ JOB=$(sudo docker run -d ubuntu /bin/sh -c \"while true; do echo Hello world; sleep 1; done\")\n\n# Collect the output of the job so far\n$ sudo docker logs $JOB\n\n# Kill the job\n$ sudo docker kill $JOB",
|
|
"title": "Starting a long-running worker process"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#listing-containers",
|
|
"tags": "",
|
|
"text": "$ sudo docker ps # Lists only running containers\n$ sudo docker ps -a # Lists all containers",
|
|
"title": "Listing containers"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#controlling-containers",
|
|
"tags": "",
|
|
"text": "# Start a new container\n$ JOB=$(sudo docker run -d ubuntu /bin/sh -c \"while true; do echo Hello world; sleep 1; done\")\n\n# Stop the container\n$ sudo docker stop $JOB\n\n# Start the container\n$ sudo docker start $JOB\n\n# Restart the container\n$ sudo docker restart $JOB\n\n# SIGKILL a container\n$ sudo docker kill $JOB\n\n# Remove a container\n$ sudo docker stop $JOB # Container must be stopped to remove it\n$ sudo docker rm $JOB",
|
|
"title": "Controlling containers"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#bind-a-service-on-a-tcp-port",
|
|
"tags": "",
|
|
"text": "# Bind port 4444 of this container, and tell netcat to listen on it\n$ JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)\n\n# Which public port is NATed to my container?\n$ PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }')\n\n# Connect to the public port\n$ echo hello world | nc 127.0.0.1 $PORT\n\n# Verify that the network connection worked\n$ echo \"Daemon received: $(sudo docker logs $JOB)\"",
|
|
"title": "Bind a service on a TCP port"
|
|
},
|
|
{
|
|
"loc": "/articles/basics#committing-saving-a-container-state",
|
|
"tags": "",
|
|
"text": "Save your containers state to an image, so the state can be\nre-used. When you commit your container only the differences between the image\nthe container was created from and the current state of the container\nwill be stored (as a diff). See which images you already have using the docker images command. # Commit your container to a new named image\n$ sudo docker commit container_id some_name \n\n# List your containers\n$ sudo docker images You now have an image state from which you can create new instances. Read more about Share Images via\nRepositories or\ncontinue to the complete Command\nLine",
|
|
"title": "Committing (saving) a container state"
|
|
},
|
|
{
|
|
"loc": "/articles/networking/",
|
|
"tags": "",
|
|
"text": "Network Configuration\nTL;DR\nWhen Docker starts, it creates a virtual interface named docker0 on\nthe host machine. It randomly chooses an address and subnet from the\nprivate range defined by RFC 1918\nthat are not in use on the host machine, and assigns it to docker0.\nDocker made the choice 172.17.42.1/16 when I started it a few minutes\nago, for example \u2014 a 16-bit netmask providing 65,534 addresses for the\nhost machine and its containers. The MAC address is generated using the\nIP address allocated to the container to avoid ARP collisions, using a\nrange from 02:42:ac:11:00:00 to 02:42:ac:11:ff:ff.\n\nNote:\nThis document discusses advanced networking configuration\nand options for Docker. In most cases you won't need this information.\nIf you're looking to get started with a simpler explanation of Docker\nnetworking and an introduction to the concept of container linking see\nthe Docker User Guide.\n\nBut docker0 is no ordinary interface. It is a virtual Ethernet\nbridge that automatically forwards packets between any other network\ninterfaces that are attached to it. This lets containers communicate\nboth with the host machine and with each other. Every time Docker\ncreates a container, it creates a pair of \u201cpeer\u201d interfaces that are\nlike opposite ends of a pipe \u2014 a packet sent on one will be received on\nthe other. It gives one of the peers to the container to become its\neth0 interface and keeps the other peer, with a unique name like\nvethAQI2QT, out in the namespace of the host machine. 
By binding\nevery veth* interface to the docker0 bridge, Docker creates a\nvirtual subnet shared between the host machine and every Docker\ncontainer.\nThe remaining sections of this document explain all of the ways that you\ncan use Docker options and \u2014 in advanced cases \u2014 raw Linux networking\ncommands to tweak, supplement, or entirely replace Docker's default\nnetworking configuration.\nQuick Guide to the Options\nHere is a quick list of the networking-related Docker command-line\noptions, in case it helps you find the section below that you are\nlooking for.\nSome networking command-line options can only be supplied to the Docker\nserver when it starts up, and cannot be changed once it is running:\n\n\n-b BRIDGE or --bridge=BRIDGE \u2014 see\n Building your own bridge\n\n\n--bip=CIDR \u2014 see\n Customizing docker0\n\n\n--fixed-cidr \u2014 see\n Customizing docker0\n\n\n--fixed-cidr-v6 \u2014 see\n IPv6\n\n\n-H SOCKET... or --host=SOCKET... \u2014\n This might sound like it would affect container networking,\n but it actually faces in the other direction:\n it tells the Docker server over what channels\n it should be willing to receive commands\n like \u201crun container\u201d and \u201cstop container.\u201d\n\n\n--icc=true|false \u2014 see\n Communication between containers\n\n\n--ip=IP_ADDRESS \u2014 see\n Binding container ports\n\n\n--ipv6=true|false \u2014 see\n IPv6\n\n\n--ip-forward=true|false \u2014 see\n Communication between containers and the wider world\n\n\n--iptables=true|false \u2014 see\n Communication between containers\n\n\n--mtu=BYTES \u2014 see\n Customizing docker0\n\n\nThere are two networking options that can be supplied either at startup\nor when docker run is invoked. When provided at startup, set the\ndefault value that docker run will later use if the options are not\nspecified:\n\n\n--dns=IP_ADDRESS... \u2014 see\n Configuring DNS\n\n\n--dns-search=DOMAIN... 
\u2014 see\n Configuring DNS\n\n\nFinally, several networking options can only be provided when calling\ndocker run because they specify something specific to one container:\n\n\n-h HOSTNAME or --hostname=HOSTNAME \u2014 see\n Configuring DNS and\n How Docker networks a container\n\n\n--link=CONTAINER_NAME_or_ID:ALIAS \u2014 see\n Configuring DNS and\n Communication between containers\n\n\n--net=bridge|none|container:NAME_or_ID|host \u2014 see\n How Docker networks a container\n\n\n--mac-address=MACADDRESS... \u2014 see\n How Docker networks a container\n\n\n-p SPEC or --publish=SPEC \u2014 see\n Binding container ports\n\n\n-P or --publish-all=true|false \u2014 see\n Binding container ports\n\n\nThe following sections tackle all of the above topics in an order that\nmoves roughly from simplest to most complex.\nConfiguring DNS\n\nHow can Docker supply each container with a hostname and DNS\nconfiguration, without having to build a custom image with the hostname\nwritten inside? Its trick is to overlay three crucial /etc files\ninside the container with virtual files where it can write fresh\ninformation. You can see this by running mount inside a container:\n$$ mount\n...\n/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...\n/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...\n/dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...\n...\n\nThis arrangement allows Docker to do clever things like keep\nresolv.conf up to date across all containers when the host machine\nreceives new configuration over\u00a0DHCP later. The exact details of how\nDocker maintains these files inside the container can change from one\nDocker version to the next, so you should leave the files themselves\nalone and use the following Docker options instead.\nFour different options affect container domain name services.\n\n\n-h HOSTNAME or --hostname=HOSTNAME \u2014 sets the hostname by which\n the container knows itself. 
This is written into /etc/hostname,\n into /etc/hosts as the name of the container's host-facing IP\n address, and is the name that /bin/bash inside the container will\n display inside its prompt. But the hostname is not easy to see from\n outside the container. It will not appear in docker ps nor in the\n /etc/hosts file of any other container.\n\n\n--link=CONTAINER_NAME_or_ID:ALIAS \u2014 using this option as you run a\n container gives the new container's /etc/hosts an extra entry\n named ALIAS that points to the IP address of the container identified by\n CONTAINER_NAME_or_ID. This lets processes inside the new container\n connect to the hostname ALIAS without having to know its IP. The\n --link= option is discussed in more detail below, in the section\n Communication between containers. Because\n Docker may assign a different IP address to the linked containers\n on restart, Docker updates the ALIAS entry in the /etc/hosts file\n of the recipient containers.\n\n\n--dns=IP_ADDRESS... \u2014 sets the IP addresses added as nameserver\n lines to the container's /etc/resolv.conf file. Processes in the\n container, when confronted with a hostname not in /etc/hosts, will\n connect to these IP addresses on port 53 looking for name resolution\n services.\n\n\n--dns-search=DOMAIN... \u2014 sets the domain names that are searched\n when a bare unqualified hostname is used inside of the container, by\n writing search lines into the container's /etc/resolv.conf.\n When a container process attempts to access host and the search\n domain example.com is set, for instance, the DNS logic will not\n only look up host but also host.example.com.\n Use --dns-search=. if you don't wish to set the search domain.\n\n\nNote that Docker, in the absence of either of the last two options\nabove, will make /etc/resolv.conf inside of each container look like\nthe /etc/resolv.conf of the host machine where the docker daemon is\nrunning. 
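The mapping from the --dns and --dns-search options to the lines Docker writes into a container's /etc/resolv.conf can be sketched in a few lines of Python. This is an illustrative model only, not Docker's actual implementation, and render_resolv_conf is a hypothetical helper (the fallback to the host's resolv.conf is omitted):

```python
# Illustrative model of how --dns / --dns-search values become
# resolv.conf contents; render_resolv_conf is a hypothetical helper,
# not Docker's actual code.
def render_resolv_conf(dns=(), dns_search=()):
    lines = ["nameserver {}".format(ip) for ip in dns]
    # "--dns-search=." is the documented way to request no search domain.
    domains = [d for d in dns_search if d != "."]
    if domains:
        lines.append("search " + " ".join(domains))
    return "\n".join(lines) + "\n"

print(render_resolv_conf(dns=["8.8.8.8", "8.8.4.4"], dns_search=["example.com"]))
```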
You might wonder what happens when the host machine's\n/etc/resolv.conf file changes. The docker daemon has a file change\nnotifier active which will watch for changes to the host DNS configuration.\nWhen the host file changes, all stopped containers whose resolv.conf\nstill matches the host's will be updated immediately to this newest host\nconfiguration. Containers which are running when the host configuration\nchanges will need to be stopped and started to pick up the host changes,\nbecause there is no facility to ensure atomic writes of the resolv.conf\nfile while the container is running. If the container's resolv.conf has\nbeen edited since it was started with the default configuration, no\nreplacement will be attempted as it would overwrite the changes performed\nby the container. If the options (--dns or --dns-search) have been used\nto modify the default host configuration, then the replacement with an\nupdated host's /etc/resolv.conf will not happen either.\n\nNote:\nFor containers which were created prior to the implementation of\nthe /etc/resolv.conf update feature in Docker 1.5.0: those\ncontainers will not receive updates when the host resolv.conf\nfile changes. Only containers created with Docker 1.5.0 and above\nwill utilize this auto-update feature.\n\nCommunication between containers and the wider world\n\nWhether a container can talk to the world is governed by two factors.\n\n\nIs the host machine willing to forward IP packets? This is governed\n by the ip_forward system parameter. Packets can only pass between\n containers if this parameter is 1. Usually you will simply leave\n the Docker server at its default setting --ip-forward=true and\n Docker will set ip_forward to 1 for you when the server\n starts up. 
To check the setting or turn it on manually:\n$ cat /proc/sys/net/ipv4/ip_forward\n0\n$ echo 1 > /proc/sys/net/ipv4/ip_forward\n$ cat /proc/sys/net/ipv4/ip_forward\n1\nMost people using Docker will want ip_forward to be on, to at\nleast make communication possible between containers and\nthe wider world.\nIt may also be needed for inter-container communication if you are\nrunning a multiple-bridge setup.\n\n\nDo your iptables allow this particular connection? Docker will\n never make changes to your system iptables rules if you set\n --iptables=false when the daemon starts. Otherwise the Docker\n server will append forwarding rules to the DOCKER filter chain.\n\n\nDocker will not delete or modify any pre-existing rules from the DOCKER\nfilter chain. This allows the user to create in advance any rules required\nto further restrict access to the containers.\nDocker's forward rules permit all external source IPs by default. To allow\nonly a specific IP or network to access the containers, insert a negated\nrule at the top of the DOCKER filter chain. For example, to restrict\nexternal access such that only source IP 8.8.8.8 can access the\ncontainers, the following rule could be added:\n$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP\n\nCommunication between containers\n\nWhether two containers can communicate is governed, at the operating\nsystem level, by two factors.\n\n\nDoes the network topology even connect the containers' network\n interfaces? By default Docker will attach all containers to a\n single docker0 bridge, providing a path for packets to travel\n between them. See the later sections of this document for other\n possible topologies.\n\n\nDo your iptables allow this particular connection? Docker will never\n make changes to your system iptables rules if you set\n --iptables=false when the daemon starts. 
Otherwise the Docker server\n will add a default rule to the FORWARD chain with a blanket ACCEPT\n policy if you retain the default --icc=true, or else will set the\n policy to DROP if --icc=false.\n\n\nIt is a strategic question whether to leave --icc=true or change it to\n--icc=false (on Ubuntu, by editing the DOCKER_OPTS variable in\n/etc/default/docker and restarting the Docker server) so that\niptables will protect other containers \u2014 and the main host \u2014 from\nhaving arbitrary ports probed or accessed by a container that gets\ncompromised.\nIf you choose the most secure setting of --icc=false, then how can\ncontainers communicate in those cases where you want them to provide\neach other services?\nThe answer is the --link=CONTAINER_NAME_or_ID:ALIAS option, which was\nmentioned in the previous section because of its effect upon name\nservices. If the Docker daemon is running with both --icc=false and\n--iptables=true then, when it sees docker run invoked with the\n--link= option, the Docker server will insert a pair of iptables\nACCEPT rules so that the new container can connect to the ports\nexposed by the other container \u2014 the ports that it mentioned in the\nEXPOSE lines of its Dockerfile. Docker has more documentation on\nthis subject \u2014 see the linking Docker containers\npage for further details.\n\nNote:\nThe value CONTAINER_NAME in --link= must either be an\nauto-assigned Docker name like stupefied_pare or else the name you\nassigned with --name= when you ran docker run. 
It cannot be a\nhostname, which Docker will not recognize in the context of the\n--link= option.\n\nYou can run the iptables command on your Docker host to see whether\nthe FORWARD chain has a default policy of ACCEPT or DROP:\n# When --icc=false, you should see a DROP rule:\n\n$ sudo iptables -L -n\n...\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\nDOCKER all -- 0.0.0.0/0 0.0.0.0/0\nDROP all -- 0.0.0.0/0 0.0.0.0/0\n...\n\n# When a --link= has been created under --icc=false,\n# you should see port-specific ACCEPT rules overriding\n# the subsequent DROP policy for all other packets:\n\n$ sudo iptables -L -n\n...\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\nDOCKER all -- 0.0.0.0/0 0.0.0.0/0\nDROP all -- 0.0.0.0/0 0.0.0.0/0\n\nChain DOCKER (1 references)\ntarget prot opt source destination\nACCEPT tcp -- 172.17.0.2 172.17.0.3 tcp spt:80\nACCEPT tcp -- 172.17.0.3 172.17.0.2 tcp dpt:80\n\n\nNote:\nDocker is careful that its host-wide iptables rules fully expose\ncontainers to each other's raw IP addresses, so connections from one\ncontainer to another should always appear to be originating from the\nfirst container's own IP address.\n\nBinding container ports to the host\n\nBy default Docker containers can make connections to the outside world,\nbut the outside world cannot connect to containers. Each outgoing\nconnection will appear to originate from one of the host machine's own\nIP addresses thanks to an iptables masquerading rule on the host\nmachine that the Docker server creates when it starts:\n# You can see that the Docker server creates a\n# masquerade rule that let containers connect\n# to IP addresses in the outside world:\n\n$ sudo iptables -t nat -L -n\n...\nChain POSTROUTING (policy ACCEPT)\ntarget prot opt source destination\nMASQUERADE all -- 172.17.0.0/16 !172.17.0.0/16\n...\n\nBut if you want containers to accept incoming connections, you will need\nto provide special options when invoking docker run. 
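Those special options are docker run's -p/--publish specifications. The SPEC forms themselves (containerPort, hostPort:containerPort, ip:hostPort:containerPort, ip::containerPort) are Docker's own syntax; the parser below is only a hypothetical sketch of how the pieces decompose, not Docker's actual code:

```python
# Hypothetical sketch of how docker run's -p/--publish SPEC forms
# decompose; the forms are Docker's, the parser is illustrative only.
def parse_publish_spec(spec):
    parts = spec.split(":")
    if len(parts) == 1:        # "80": container port only, host port auto-chosen
        return {"ip": "0.0.0.0", "host_port": None, "container_port": parts[0]}
    if len(parts) == 2:        # "8080:80": explicit host and container ports
        return {"ip": "0.0.0.0", "host_port": parts[0], "container_port": parts[1]}
    ip, host_port, container_port = parts   # "127.0.0.1:8080:80" or "127.0.0.1::80"
    return {"ip": ip, "host_port": host_port or None, "container_port": container_port}

print(parse_publish_spec("127.0.0.1::80"))
```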
These options\nare covered in more detail in the Docker User Guide\npage. There are two approaches.\nFirst, you can supply -P or --publish-all=true|false to docker run\nwhich is a blanket operation that identifies every port with an EXPOSE\nline in the image's Dockerfile and maps it to a host port somewhere in\nthe range 49153\u201365535. This tends to be a bit inconvenient, since you\nthen have to run other docker sub-commands to learn which external\nport a given service was mapped to.\nMore convenient is the -p SPEC or --publish=SPEC option which lets\nyou be explicit about exactly which external port on the Docker server \u2014\nwhich can be any port at all, not just those in the 49153-65535 block \u2014\nyou want mapped to which port in the container.\nEither way, you should be able to peek at what Docker has accomplished\nin your network stack by examining your NAT tables.\n# What your NAT rules might look like when Docker\n# is finished setting up a -P forward:\n\n$ iptables -t nat -L -n\n...\nChain DOCKER (2 references)\ntarget prot opt source destination\nDNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:49153 to:172.17.0.2:80\n\n# What your NAT rules might look like when Docker\n# is finished setting up a -p 80:80 forward:\n\nChain DOCKER (2 references)\ntarget prot opt source destination\nDNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80\n\nYou can see that Docker has exposed these container ports on 0.0.0.0,\nthe wildcard IP address that will match any possible incoming port on\nthe host machine. If you want to be more restrictive and only allow\ncontainer services to be contacted through a specific external interface\non the host machine, you have two choices. 
When you invoke docker run\nyou can use either -p IP:host_port:container_port or -p IP::port to\nspecify the external interface for one particular binding.\nOr if you always want Docker port forwards to bind to one specific IP\naddress, you can edit your system-wide Docker server settings (on\nUbuntu, by editing DOCKER_OPTS in /etc/default/docker) and add the\noption --ip=IP_ADDRESS. Remember to restart your Docker server after\nediting this setting.\nAgain, this topic is covered without all of these low-level networking\ndetails in the Docker User Guide document if you\nwould like to use that as your port redirection reference instead.\nIPv6\n\nAs we are running out of IPv4 addresses,\nthe IETF has standardized an IPv4 successor, Internet Protocol Version 6,\nin RFC 2460. Both protocols, IPv4 and\nIPv6, reside on layer 3 of the OSI model.\nIPv6 with Docker\nBy default, the Docker server configures the container network for IPv4 only.\nYou can enable IPv4/IPv6 dual-stack support by running the Docker daemon with the\n--ipv6 flag. Docker will set up the bridge docker0 with the IPv6\nlink-local address fe80::1.\nBy default, containers that are created will only get a link-local IPv6 address.\nTo assign globally routable IPv6 addresses to your containers you have to\nspecify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the\n--fixed-cidr-v6 parameter when starting the Docker daemon:\ndocker -d --ipv6 --fixed-cidr-v6=\"2001:db8:1::/64\"\n\nThe subnet for Docker containers should at least have a size of /80. This way\nan IPv6 address can end with the container's MAC address and you prevent NDP\nneighbor cache invalidation issues in the Docker layer.\nWith the --fixed-cidr-v6 parameter set Docker will add a new route to the\nrouting table. 
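The arithmetic behind the /80 recommendation can be sketched with Python's ipaddress module: a /80 prefix leaves exactly 48 host bits, so a container's 48-bit MAC address can form the host part of its IPv6 address. The mac_based_v6 helper is hypothetical and only illustrates the bit layout, not Docker's actual allocator:

```python
import ipaddress

# Sketch of the arithmetic behind the /80 recommendation: a /80 prefix
# leaves 48 host bits, exactly enough to embed a 48-bit MAC address.
# mac_based_v6 is a hypothetical helper, not Docker's actual allocator.
def mac_based_v6(subnet, mac):
    net = ipaddress.IPv6Network(subnet)
    if net.prefixlen > 80:
        raise ValueError("need a /80 or larger to fit a 48-bit MAC")
    mac_bits = int(mac.replace(":", ""), 16)
    return ipaddress.IPv6Address(int(net.network_address) | mac_bits)

# With the example subnet above and a typical Docker-style MAC address:
print(mac_based_v6("2001:db8:1::/80", "02:42:ac:11:00:03"))
# -> 2001:db8:1::242:ac11:3
```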
Further IPv6 routing will be enabled (you may prevent this by\nstarting the Docker daemon with --ip-forward=false):\n$ ip -6 route add 2001:db8:1::/64 dev docker0\n$ sysctl net.ipv6.conf.default.forwarding=1\n$ sysctl net.ipv6.conf.all.forwarding=1\n\nAll traffic to the subnet 2001:db8:1::/64 will now be routed\nvia the docker0 interface.\nBe aware that IPv6 forwarding may interfere with your existing IPv6\nconfiguration: If you are using Router Advertisements to get IPv6 settings for\nyour host's interfaces you should set accept_ra to 2. Otherwise, enabling IPv6\nforwarding will result in Router Advertisements being rejected. E.g., if you\nwant to configure eth0 via Router Advertisements you should set:\n$ sysctl net.ipv6.conf.eth0.accept_ra=2\n\nEvery new container will get an IPv6 address from the defined subnet. Furthermore,\na default route will be added via the gateway fe80::1 on eth0:\ndocker run -it ubuntu bash -c \"ip -6 addr show dev eth0; ip -6 route show\"\n\n15: eth0: BROADCAST,UP,LOWER_UP mtu 1500\n inet6 2001:db8:1:0:0:242:ac11:3/64 scope global\n valid_lft forever preferred_lft forever\n inet6 fe80::42:acff:fe11:3/64 scope link\n valid_lft forever preferred_lft forever\n\n2001:db8:1::/64 dev eth0 proto kernel metric 256\nfe80::/64 dev eth0 proto kernel metric 256\ndefault via fe80::1 dev eth0 metric 1024\n\nIn this example the Docker container is assigned a link-local address with the\nnetwork suffix /64 (here: fe80::42:acff:fe11:3/64) and a globally routable\nIPv6 address (here: 2001:db8:1:0:0:242:ac11:3/64). The container will create\nconnections to addresses outside of the 2001:db8:1::/64 network via the\nlink-local gateway at fe80::1 on eth0.\nOften servers or virtual machines get a /64 IPv6 subnet assigned (e.g.\n2001:db8:23:42::/64). 
In this case you can split it up further and provide\nDocker a /80 subnet while using a separate /80 subnet for other\napplications on the host:\n\nIn this setup the subnet 2001:db8:23:42::/80 with a range from 2001:db8:23:42:0:0:0:0\nto 2001:db8:23:42:0:ffff:ffff:ffff is attached to eth0, with the host listening\nat 2001:db8:23:42::1. The subnet 2001:db8:23:42:1::/80 with an address range from\n2001:db8:23:42:1:0:0:0 to 2001:db8:23:42:1:ffff:ffff:ffff is attached to\ndocker0 and will be used by containers.\nDocker IPv6 Cluster\nSwitched Network Environment\nUsing routable IPv6 addresses enables communication between\ncontainers on different hosts. Let's have a look at a simple Docker IPv6 cluster\nexample:\n\nThe Docker hosts are in the 2001:db8:0::/64 subnet. Host1 is configured\nto provide addresses from the 2001:db8:1::/64 subnet to its containers. It\nhas three routes configured:\n\nRoute all traffic to 2001:db8:0::/64 via eth0\nRoute all traffic to 2001:db8:1::/64 via docker0\nRoute all traffic to 2001:db8:2::/64 via Host2 with IP 2001:db8::2\n\nHost1 also acts as a router on OSI layer 3. When one of the network clients\ntries to contact a target that is specified in Host1's routing table, Host1 will\nforward the traffic accordingly. It acts as a router for all networks it knows:\n2001:db8::/64, 2001:db8:1::/64 and 2001:db8:2::/64.\nOn Host2 we have nearly the same configuration. Host2's containers will get\nIPv6 addresses from 2001:db8:2::/64. Host2 has three routes configured:\n\nRoute all traffic to 2001:db8:0::/64 via eth0\nRoute all traffic to 2001:db8:2::/64 via docker0\nRoute all traffic to 2001:db8:1::/64 via Host1 with IP 2001:db8:0::1\n\nThe difference from Host1 is that the network 2001:db8:2::/64 is directly\nattached to the host via its docker0 interface whereas it reaches\n2001:db8:1::/64 via Host1's IPv6 address 2001:db8::1.\nThis way every container is able to contact every other container. 
The\ncontainers Container1-* share the same subnet and contact each other directly.\nThe traffic between Container1-* and Container2-* will be routed via Host1\nand Host2 because those containers do not share the same subnet.\nIn a switched environment every host has to know all routes to every subnet. You\nalways have to update the hosts' routing tables whenever you add a host to\nor remove one from the cluster.\nEvery configuration in the diagram that is shown below the dashed line is\nhandled by Docker: The docker0 bridge IP address configuration, the route to\nthe Docker subnet on the host, the container IP addresses and the routes on the\ncontainers. The configuration above the line is up to the user and can be\nadapted to the individual environment.\nRouted Network Environment\nIn a routed network environment you replace the layer 2 switch with a layer 3\nrouter. Now the hosts just have to know their default gateway (the router) and\nthe route to their own containers (managed by Docker). The router holds all\nrouting information about the Docker subnets. When you add a host to or remove\none from this environment you just have to update the routing table in the\nrouter - not on every host.\n\nIn this scenario containers of the same host can communicate directly with each\nother. The traffic between containers on different hosts will be routed via\ntheir hosts and the router. For example, a packet from Container1-1 to\nContainer2-1 will be routed through Host1, Router and Host2 until it\narrives at Container2-1.\nTo keep the IPv6 addresses short in this example a /48 network is assigned to\nevery host. Each host uses one /64 subnet of this for its own services and one\nfor Docker. 
When adding a third host you would add a route for the subnet\n2001:db8:3::/48 in the router and configure Docker on Host3 with\n--fixed-cidr-v6=2001:db8:3:1::/64.\nRemember that the subnet for Docker containers should at least have a size of /80.\nThis way an IPv6 address can end with the container's MAC address and you\nprevent NDP neighbor cache invalidation issues in the Docker layer. So if you\nhave a /64 for your whole environment use /68 subnets for the hosts and\n/80 for the containers. This way you can use 16 hosts with 4096 /80 subnets\neach.\nEvery configuration in the diagram that is visualized below the dashed line is\nhandled by Docker: The docker0 bridge IP address configuration, the route to\nthe Docker subnet on the host, the container IP addresses and the routes on the\ncontainers. The configuration above the line is up to the user and can be\nadapted to the individual environment.\nCustomizing docker0\n\nBy default, the Docker server creates and configures the host system's\ndocker0 interface as an Ethernet bridge inside the Linux kernel that\ncan pass packets back and forth between other physical or virtual\nnetwork interfaces so that they behave as a single Ethernet network.\nDocker configures docker0 with an IP address, netmask and IP\nallocation range, so that the host machine can both receive and send packets to\ncontainers connected to the bridge, and gives the bridge an MTU \u2014 the maximum\ntransmission unit, or largest packet length that the interface will\nallow \u2014 of either 1,500 bytes or else a more specific value copied from\nthe Docker host's interface that supports its default route. These\noptions are configurable at server startup:\n\n\n--bip=CIDR \u2014 supply a specific IP address and netmask for the\n docker0 bridge, using standard CIDR notation like\n 192.168.1.5/24.\n\n\n--fixed-cidr=CIDR \u2014 restrict the IP range from the docker0 subnet,\n using the standard CIDR notation like 172.167.1.0/28. 
This range must\n be an IPv4 range for fixed IPs (ex: 10.20.0.0/16) and must be a subset\n of the bridge IP range (docker0 or set using --bridge). For example\n with --fixed-cidr=192.168.1.0/25, IPs for your containers will be chosen\n from the first half of the 192.168.1.0/24 subnet.\n\n\n--mtu=BYTES \u2014 override the maximum packet length on docker0.\n\n\nOn Ubuntu you would add these to the DOCKER_OPTS setting in\n/etc/default/docker on your Docker host and restart the Docker\nservice.\nOnce you have one or more containers up and running, you can confirm\nthat Docker has properly connected them to the docker0 bridge by\nrunning the brctl command on the host machine and looking at the\ninterfaces column of the output. Here is a host with two different\ncontainers connected:\n# Display bridge info\n\n$ sudo brctl show\nbridge name bridge id STP enabled interfaces\ndocker0 8000.3a1d7362b4ee no veth65f9\n vethdda6\n\nIf the brctl command is not installed on your Docker host, then on\nUbuntu you should be able to run sudo apt-get install bridge-utils to\ninstall it.\nFinally, the docker0 Ethernet bridge settings are used every time you\ncreate a new container. Docker selects a free IP address from the range\navailable on the bridge each time you docker run a new container, and\nconfigures the container's eth0 interface with that IP address and the\nbridge's netmask. 
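Selecting a free IP address from the range available on the bridge amounts to something like the following sketch. It is illustrative only: next_free_ip is a hypothetical helper, and Docker's real allocator differs in detail:

```python
import ipaddress

# Illustrative sketch only: given the bridge's address/netmask and the
# set of addresses already handed out, pick the next free host address.
# next_free_ip is hypothetical, not Docker's actual allocator.
def next_free_ip(bridge_cidr, in_use):
    iface = ipaddress.IPv4Interface(bridge_cidr)
    used = {ipaddress.IPv4Address(a) for a in in_use}
    used.add(iface.ip)                      # the bridge's own address is taken
    for host in iface.network.hosts():      # e.g. 172.17.0.1, 172.17.0.2, ...
        if host not in used:
            return str(host)
    raise RuntimeError("bridge address pool exhausted")

# docker0 at 172.17.42.1/16 with two containers already running:
print(next_free_ip("172.17.42.1/16", ["172.17.0.1", "172.17.0.2"]))
# -> 172.17.0.3
```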
The Docker host's own IP address on the bridge is\nused as the default gateway by which each container reaches the rest of\nthe Internet.\n# The network, as seen from a container\n\n$ sudo docker run -i -t --rm base /bin/bash\n\n$$ ip addr show eth0\n24: eth0: BROADCAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP group default qlen 1000\n link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff\n inet 172.17.0.3/16 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::306f:e0ff:fe35:5791/64 scope link\n valid_lft forever preferred_lft forever\n\n$$ ip route\ndefault via 172.17.42.1 dev eth0\n172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3\n\n$$ exit\n\nRemember that the Docker host will not be willing to forward container\npackets out onto the Internet unless its ip_forward system setting is\n1 \u2014 see the section above on Communication between containers\nand the wider world for details.\nBuilding your own bridge\n\nIf you want to take Docker out of the business of creating its own\nEthernet bridge entirely, you can set up your own bridge before starting\nDocker and use -b BRIDGE or --bridge=BRIDGE to tell Docker to use\nyour bridge instead. If you already have Docker up and running with its\nold docker0 still configured, you will probably want to begin by\nstopping the service and removing the interface:\n# Stopping Docker and removing docker0\n\n$ sudo service docker stop\n$ sudo ip link set dev docker0 down\n$ sudo brctl delbr docker0\n$ sudo iptables -t nat -F POSTROUTING\n\nThen, before starting the Docker service, create your own bridge and\ngive it whatever configuration you want. 
Here we will create a simple\nenough bridge that we really could just have used the options in the\nprevious section to customize docker0, but it will be enough to\nillustrate the technique.\n# Create our own bridge\n\n$ sudo brctl addbr bridge0\n$ sudo ip addr add 192.168.5.1/24 dev bridge0\n$ sudo ip link set dev bridge0 up\n\n# Confirming that our bridge is up and running\n\n$ ip addr show bridge0\n4: bridge0: BROADCAST,MULTICAST mtu 1500 qdisc noop state UP group default\n link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff\n inet 192.168.5.1/24 scope global bridge0\n valid_lft forever preferred_lft forever\n\n# Tell Docker about it and restart (on Ubuntu)\n\n$ echo 'DOCKER_OPTS=\"-b=bridge0\"' >> /etc/default/docker\n$ sudo service docker start\n\n# Confirming new outgoing NAT masquerade is set up\n\n$ sudo iptables -t nat -L -n\n...\nChain POSTROUTING (policy ACCEPT)\ntarget prot opt source destination\nMASQUERADE all -- 192.168.5.0/24 0.0.0.0/0\n\nThe result should be that the Docker server starts successfully and is\nnow prepared to bind containers to the new bridge. 
After pausing to\nverify the bridge's configuration, try creating a container \u2014 you will\nsee that its IP address is in your new IP address range, which Docker\nwill have auto-detected.\nJust as we learned in the previous section, you can use the brctl show\ncommand to see Docker add and remove interfaces from the bridge as you\nstart and stop containers, and can run ip addr and ip route inside a\ncontainer to see that it has been given an address in the bridge's IP\naddress range and has been told to use the Docker host's IP address on\nthe bridge as its default gateway to the rest of the Internet.\nHow Docker networks a container\n\nWhile Docker is under active development and continues to tweak and\nimprove its network configuration logic, the shell commands in this\nsection are rough equivalents to the steps that Docker takes when\nconfiguring networking for each new container.\nLet's review a few basics.\nTo communicate using the Internet Protocol\u00a0(IP), a machine needs access\nto at least one network interface at which packets can be sent and\nreceived, and a routing table that defines the range of IP addresses\nreachable through that interface. Network interfaces do not have to be\nphysical devices. In fact, the lo loopback interface available on\nevery Linux machine (and inside each Docker container) is entirely\nvirtual \u2014 the Linux kernel simply copies loopback packets directly from\nthe sender's memory into the receiver's memory.\nDocker uses special virtual interfaces to let containers communicate\nwith the host machine \u2014 pairs of virtual interfaces called \u201cpeers\u201d that\nare linked inside of the host machine's kernel so that packets can\ntravel between them. 
They are simple to create, as we will see in a\nmoment.\nThe steps with which Docker configures a container are:\n\n\nCreate a pair of peer virtual interfaces.\n\n\nGive one of them a unique name like veth65f9, keep it inside of\n the main Docker host, and bind it to docker0 or whatever bridge\n Docker is supposed to be using.\n\n\nToss the other interface over the wall into the new container (which\n will already have been provided with an lo interface) and rename\n it to the much prettier name eth0 since, inside of the container's\n separate and unique network interface namespace, there are no\n physical interfaces with which this name could collide.\n\n\nSet the interface's MAC address according to the --mac-address\n parameter or generate a random one.\n\n\nGive the container's eth0 a new IP address from within the\n bridge's range of network addresses, and set its default route to\n the IP address that the Docker host owns on the bridge. If available\n the IP address is generated from the MAC address. This prevents ARP\n cache invalidation problems, when a new container comes up with an\n IP used in the past by another container with another MAC.\n\n\nWith these steps complete, the container now possesses an eth0\n(virtual) network card and will find itself able to communicate with\nother containers and the rest of the Internet.\nYou can opt out of the above process for a particular container by\ngiving the --net= option to docker run, which takes four possible\nvalues.\n\n\n--net=bridge \u2014 The default action, that connects the container to\n the Docker bridge as described above.\n\n\n--net=host \u2014 Tells Docker to skip placing the container inside of\n a separate network stack. In essence, this choice tells Docker to\n not containerize the container's networking! 
While container\n processes will still be confined to their own filesystem and process\n list and resource limits, a quick ip addr command will show you\n that, network-wise, they live \u201coutside\u201d in the main Docker host and\n have full access to its network interfaces. Note that this does\n not let the container reconfigure the host network stack \u2014 that\n would require --privileged=true \u2014 but it does let container\n processes open low-numbered ports like any other root process.\n It also allows the container to access local network services\n like D-bus. This can lead to processes in the container being\n able to do unexpected things like\n restart your computer.\n You should use this option with caution.\n\n\n--net=container:NAME_or_ID \u2014 Tells Docker to put this container's\n processes inside of the network stack that has already been created\n inside of another container. The new container's processes will be\n confined to their own filesystem and process list and resource\n limits, but will share the same IP address and port numbers as the\n first container, and processes on the two containers will be able to\n connect to each other over the loopback interface.\n\n\n--net=none \u2014 Tells Docker to put the container inside of its own\n network stack but not to take any steps to configure its network,\n leaving you free to build any of the custom configurations explored\n in the last few sections of this document.\n\n\nTo get an idea of the steps that are necessary if you use --net=none\nas described in that last bullet point, here are the commands that you\nwould run to reach roughly the same configuration as if you had let\nDocker do all of the configuration:\n# At one shell, start a container and\n# leave its shell idle and running\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@63f36fc01b5f:/#\n\n# At another shell, learn the container process ID\n# and create its namespace entry in /var/run/netns/\n# for the \"ip 
netns\" command we will be using below\n\n$ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f\n2778\n$ pid=2778\n$ sudo mkdir -p /var/run/netns\n$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid\n\n# Check the bridge's IP address and netmask\n\n$ ip addr show docker0\n21: docker0: ...\ninet 172.17.42.1/16 scope global docker0\n...\n\n# Create a pair of \"peer\" interfaces A and B,\n# bind the A end to the bridge, and bring it up\n\n$ sudo ip link add A type veth peer name B\n$ sudo brctl addif docker0 A\n$ sudo ip link set A up\n\n# Place B inside the container's network namespace,\n# rename to eth0, and activate it with a free IP\n\n$ sudo ip link set B netns $pid\n$ sudo ip netns exec $pid ip link set dev B name eth0\n$ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc\n$ sudo ip netns exec $pid ip link set eth0 up\n$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0\n$ sudo ip netns exec $pid ip route add default via 172.17.42.1\n\nAt this point your container should be able to perform networking\noperations as usual.\nWhen you finally exit the shell and Docker cleans up the container, the\nnetwork namespace is destroyed along with our virtual eth0 \u2014 whose\ndestruction in turn destroys interface A out in the Docker host and\nautomatically un-registers it from the docker0 bridge. So everything\ngets cleaned up without our having to run any extra commands! Well,\nalmost everything:\n# Clean up dangling symlinks in /var/run/netns\n\nfind -L /var/run/netns -type l -delete\n\nAlso note that while the script above used the modern ip command instead\nof older, deprecated wrappers like ifconfig and route, these older\ncommands would also have worked inside of our container. The ip addr\ncommand can be typed as ip a if you are in a hurry.\nFinally, note the importance of the ip netns exec command, which let\nus reach inside and configure a network namespace as root. 
The same\ncommands would not have worked if run inside of the container, because\npart of safe containerization is that Docker strips container processes\nof the right to configure their own networks. Using ip netns exec is\nwhat let us finish up the configuration without having to take the\ndangerous step of running the container itself with --privileged=true.\nTools and Examples\nBefore diving into the following sections on custom network topologies,\nyou might be interested in glancing at a few external tools or examples\nof the same kinds of configuration. Here are two:\n\n\nJ\u00e9r\u00f4me Petazzoni has created a pipework shell script to help you\n connect together containers in arbitrarily complex scenarios:\n https://github.com/jpetazzo/pipework\n\n\nBrandon Rhodes has created a whole network topology of Docker\n containers for the next edition of Foundations of Python Network\n Programming that includes routing, NAT'd firewalls, and servers that\n offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:\n https://github.com/brandon-rhodes/fopnp/tree/m/playground\n\n\nBoth tools use networking commands very much like the ones you saw in\nthe previous section, and will see in the following sections.\nBuilding a point-to-point connection\n\nBy default, Docker attaches all containers to the virtual subnet\nimplemented by docker0. 
You can create containers that are each\nconnected to some different virtual subnet by creating your own bridge\nas shown in Building your own bridge, starting each\ncontainer with docker run --net=none, and then attaching the\ncontainers to your bridge with the shell commands shown in How Docker\nnetworks a container.\nBut sometimes you want two particular containers to be able to\ncommunicate directly without the added complexity of both being bound to\na host-wide Ethernet bridge.\nThe solution is simple: when you create your pair of peer interfaces,\nsimply throw both of them into containers, and configure them as\nclassic point-to-point links. The two containers will then be able to\ncommunicate directly (provided you manage to tell each container the\nother's IP address, of course). You might adjust the instructions of\nthe previous section to go something like this:\n# Start up two containers in two terminal windows\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@1f1f4c1f931a:/#\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@12e343489d2f:/#\n\n# Learn the container process IDs\n# and create their namespace entries\n\n$ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a\n2989\n$ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f\n3004\n$ sudo mkdir -p /var/run/netns\n$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989\n$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004\n\n# Create the \"peer\" interfaces and hand them out\n\n$ sudo ip link add A type veth peer name B\n\n$ sudo ip link set A netns 2989\n$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A\n$ sudo ip netns exec 2989 ip link set A up\n$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A\n\n$ sudo ip link set B netns 3004\n$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B\n$ sudo ip netns exec 3004 ip link set B up\n$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B\n\nThe two containers should now be able to ping each other and 
make\nconnections successfully. Point-to-point links like this do not depend\non a subnet or a netmask, but on the bare assertion made by ip route\nthat some other single IP address is connected to a particular network\ninterface.\nNote that point-to-point links can be safely combined with other kinds\nof network connectivity \u2014 there is no need to start the containers with\n--net=none if you want point-to-point links to be an addition to the\ncontainer's normal networking instead of a replacement.\nA final permutation of this pattern is to create the point-to-point link\nbetween the Docker host and one container, which would allow the host to\ncommunicate with that one container on some single IP address and thus\ncommunicate \u201cout-of-band\u201d of the bridge that connects the other, more\nusual containers. But unless you have very specific networking needs\nthat drive you to such a solution, it is probably far preferable to use\n--icc=false to lock down inter-container communication, as we explored\nearlier.\nEditing networking config files\nStarting with Docker v1.2.0, you can edit /etc/hosts, /etc/hostname\nand /etc/resolv.conf in a running container. This is useful if you need\nto install bind or other services that might override one of those files.\nNote, however, that changes to these files will not be saved by\ndocker commit, nor will they be saved during docker run.\nThat means they won't be saved in the image, nor will they persist when a\ncontainer is restarted; they will only \"stick\" in a running container.",
"title": "Advanced networking"
},
{
"loc": "/articles/networking#network-configuration",
"tags": "",
"text": "",
"title": "Network Configuration"
},
{
"loc": "/articles/networking#tldr",
"tags": "",
"text": "Note: \nThis document discusses advanced networking configuration\nand options for Docker. In most cases you won't need this information.\nIf you're looking to get started with a simpler explanation of Docker\nnetworking and an introduction to the concept of container linking, see\nthe Docker User Guide . When Docker starts, it creates a virtual interface named docker0 on\nthe host machine. It randomly chooses an address and subnet from the\nprivate range defined by RFC 1918 \nthat are not in use on the host machine, and assigns it to docker0 .\nDocker made the choice 172.17.42.1/16 when I started it a few minutes\nago, for example \u2014 a 16-bit netmask providing 65,534 addresses for the\nhost machine and its containers. Each container's MAC address is generated from the\nIP address allocated to the container to avoid ARP collisions, using the\nrange from 02:42:ac:11:00:00 to 02:42:ac:11:ff:ff . But docker0 is no ordinary interface. It is a virtual Ethernet\nbridge that automatically forwards packets between any other network\ninterfaces that are attached to it. This lets containers communicate\nboth with the host machine and with each other. Every time Docker\ncreates a container, it creates a pair of \u201cpeer\u201d interfaces that are\nlike opposite ends of a pipe \u2014 a packet sent on one will be received on\nthe other. It gives one of the peers to the container to become its eth0 interface and keeps the other peer, with a unique name like vethAQI2QT , out in the namespace of the host machine. By binding\nevery veth* interface to the docker0 bridge, Docker creates a\nvirtual subnet shared between the host machine and every Docker\ncontainer. The remaining sections of this document explain all of the ways that you\ncan use Docker options and \u2014 in advanced cases \u2014 raw Linux networking\ncommands to tweak, supplement, or entirely replace Docker's default\nnetworking configuration.",
"title": "TL;DR"
},
{
"loc": "/articles/networking#quick-guide-to-the-options",
"tags": "",
"text": "Here is a quick list of the networking-related Docker command-line\noptions, in case it helps you find the section below that you are\nlooking for. Some networking command-line options can only be supplied to the Docker\nserver when it starts up, and cannot be changed once it is running: -b BRIDGE or --bridge=BRIDGE \u2014 see\n Building your own bridge --bip=CIDR \u2014 see\n Customizing docker0 --fixed-cidr \u2014 see\n Customizing docker0 --fixed-cidr-v6 \u2014 see\n IPv6 -H SOCKET... or --host=SOCKET... \u2014\n This might sound like it would affect container networking,\n but it actually faces in the other direction:\n it tells the Docker server over what channels\n it should be willing to receive commands\n like \u201crun container\u201d and \u201cstop container.\u201d --icc=true|false \u2014 see\n Communication between containers --ip=IP_ADDRESS \u2014 see\n Binding container ports --ipv6=true|false \u2014 see\n IPv6 --ip-forward=true|false \u2014 see\n Communication between containers and the wider world --iptables=true|false \u2014 see\n Communication between containers --mtu=BYTES \u2014 see\n Customizing docker0 There are two networking options that can be supplied either at startup\nor when docker run is invoked. When provided at startup, set the\ndefault value that docker run will later use if the options are not\nspecified: --dns=IP_ADDRESS... \u2014 see\n Configuring DNS --dns-search=DOMAIN... \u2014 see\n Configuring DNS Finally, several networking options can only be provided when calling docker run because they specify something specific to one container: -h HOSTNAME or --hostname=HOSTNAME \u2014 see\n Configuring DNS and\n How Docker networks a container --link=CONTAINER_NAME_or_ID:ALIAS \u2014 see\n Configuring DNS and\n Communication between containers --net=bridge|none|container:NAME_or_ID|host \u2014 see\n How Docker networks a container --mac-address=MACADDRESS... 
\u2014 see\n How Docker networks a container -p SPEC or --publish=SPEC \u2014 see\n Binding container ports -P or --publish-all=true|false \u2014 see\n Binding container ports The following sections tackle all of the above topics in an order that\nmoves roughly from simplest to most complex.",
"title": "Quick Guide to the Options"
},
{
"loc": "/articles/networking#configuring-dns",
"tags": "",
"text": "How can Docker supply each container with a hostname and DNS\nconfiguration, without having to build a custom image with the hostname\nwritten inside? Its trick is to overlay three crucial /etc files\ninside the container with virtual files where it can write fresh\ninformation. You can see this by running mount inside a container: $$ mount\n...\n/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...\n/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...\n/dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...\n... This arrangement allows Docker to do clever things like keep resolv.conf up to date across all containers when the host machine\nreceives new configuration over\u00a0DHCP later. The exact details of how\nDocker maintains these files inside the container can change from one\nDocker version to the next, so you should leave the files themselves\nalone and use the following Docker options instead. Four different options affect container domain name services. -h HOSTNAME or --hostname=HOSTNAME \u2014 sets the hostname by which\n the container knows itself. This is written into /etc/hostname ,\n into /etc/hosts as the name of the container's host-facing IP\n address, and is the name that /bin/bash inside the container will\n display inside its prompt. But the hostname is not easy to see from\n outside the container. It will not appear in docker ps nor in the\n /etc/hosts file of any other container. --link=CONTAINER_NAME_or_ID:ALIAS \u2014 using this option as you run a\n container gives the new container's /etc/hosts an extra entry\n named ALIAS that points to the IP address of the container identified by\n CONTAINER_NAME_or_ID . This lets processes inside the new container\n connect to the hostname ALIAS without having to know its IP. The\n --link= option is discussed in more detail below, in the section\n Communication between containers . 
Because\n Docker may assign a different IP address to the linked containers\n on restart, Docker updates the ALIAS entry in the /etc/hosts file\n of the recipient containers. --dns=IP_ADDRESS... \u2014 sets the IP addresses added as nameserver\n lines to the container's /etc/resolv.conf file. Processes in the\n container, when confronted with a hostname not in /etc/hosts , will\n connect to these IP addresses on port 53 looking for name resolution\n services. --dns-search=DOMAIN... \u2014 sets the domain names that are searched\n when a bare unqualified hostname is used inside of the container, by\n writing search lines into the container's /etc/resolv.conf .\n When a container process attempts to access host and the search\n domain example.com is set, for instance, the DNS logic will not\n only look up host but also host.example.com .\n Use --dns-search=. if you don't wish to set the search domain. Note that Docker, in the absence of either of the last two options\nabove, will make /etc/resolv.conf inside of each container look like\nthe /etc/resolv.conf of the host machine where the docker daemon is\nrunning. You might wonder what happens when the host machine's /etc/resolv.conf file changes. The docker daemon has a file change\nnotifier active which will watch for changes to the host DNS configuration.\nWhen the host file changes, all stopped containers which have a resolv.conf matching the host's will be updated immediately to this newest host\nconfiguration. Containers which are running when the host configuration\nchanges will need to stop and start to pick up the host changes due to lack\nof a facility to ensure atomic writes of the resolv.conf file while the\ncontainer is running. 
If the container's resolv.conf has been edited since\nit was started with the default configuration, no replacement will be\nattempted as it would overwrite the changes performed by the container.\nIf the options ( --dns or --dns-search ) have been used to modify the \ndefault host configuration, then the replacement with an updated host's /etc/resolv.conf will not happen as well. Note :\nFor containers which were created prior to the implementation of\nthe /etc/resolv.conf update feature in Docker 1.5.0: those\ncontainers will not receive updates when the host resolv.conf \nfile changes. Only containers created with Docker 1.5.0 and above\nwill utilize this auto-update feature.",
"title": "Configuring DNS"
},
{
"loc": "/articles/networking#communication-between-containers-and-the-wider-world",
"tags": "",
"text": "Whether a container can talk to the world is governed by two factors. Is the host machine willing to forward IP packets? This is governed\n by the ip_forward system parameter. Packets can only pass between\n containers if this parameter is 1 . Usually you will simply leave\n the Docker server at its default setting --ip-forward=true and\n Docker will set ip_forward to 1 for you when the server\n starts up. To check the setting or turn it on manually: $ cat /proc/sys/net/ipv4/ip_forward\n0\n$ echo 1 > /proc/sys/net/ipv4/ip_forward\n$ cat /proc/sys/net/ipv4/ip_forward\n1 Most Docker users will want ip_forward to be on, to at\nleast make communication possible between containers and\nthe wider world. It may also be needed for inter-container communication if you are\nrunning a multiple-bridge setup. Do your iptables allow this particular connection? Docker will\n never make changes to your system iptables rules if you set\n --iptables=false when the daemon starts. Otherwise the Docker\n server will append forwarding rules to the DOCKER filter chain. Docker will not delete or modify any pre-existing rules from the DOCKER \nfilter chain. This allows the user to create in advance any rules required\nto further restrict access to the containers. Docker's forward rules permit all external source IPs by default. To allow\nonly a specific IP or network to access the containers, insert a negated\nrule at the top of the DOCKER filter chain. For example, to restrict\nexternal access such that only source IP 8.8.8.8 can access the\ncontainers, the following rule could be added: $ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP",
"title": "Communication between containers and the wider world"
},
{
"loc": "/articles/networking#communication-between-containers",
"tags": "",
"text": "Whether two containers can communicate is governed, at the operating\nsystem level, by two factors. Does the network topology even connect the containers' network\n interfaces? By default Docker will attach all containers to a\n single docker0 bridge, providing a path for packets to travel\n between them. See the later sections of this document for other\n possible topologies. Do your iptables allow this particular connection? Docker will never\n make changes to your system iptables rules if you set\n --iptables=false when the daemon starts. Otherwise the Docker server\n will add a default rule to the FORWARD chain with a blanket ACCEPT \n policy if you retain the default --icc=true , or else will set the\n policy to DROP if --icc=false . It is a strategic question whether to leave --icc=true or change it to --icc=false (on Ubuntu, by editing the DOCKER_OPTS variable in /etc/default/docker and restarting the Docker server) so that iptables will protect other containers \u2014 and the main host \u2014 from\nhaving arbitrary ports probed or accessed by a container that gets\ncompromised. If you choose the most secure setting of --icc=false , then how can\ncontainers communicate in those cases where you want them to provide\neach other services? The answer is the --link=CONTAINER_NAME_or_ID:ALIAS option, which was\nmentioned in the previous section because of its effect upon name\nservices. If the Docker daemon is running with both --icc=false and --iptables=true then, when it sees docker run invoked with the --link= option, the Docker server will insert a pair of iptables ACCEPT rules so that the new container can connect to the ports\nexposed by the other container \u2014 the ports that it mentioned in the EXPOSE lines of its Dockerfile . Docker has more documentation on\nthis subject \u2014 see the linking Docker containers \npage for further details. 
Note :\nThe value CONTAINER_NAME in --link= must either be an\nauto-assigned Docker name like stupefied_pare or else the name you\nassigned with --name= when you ran docker run . It cannot be a\nhostname, which Docker will not recognize in the context of the --link= option. You can run the iptables command on your Docker host to see whether\nthe FORWARD chain has a default policy of ACCEPT or DROP : # When --icc=false, you should see a DROP rule:\n\n$ sudo iptables -L -n\n...\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\nDOCKER all -- 0.0.0.0/0 0.0.0.0/0\nDROP all -- 0.0.0.0/0 0.0.0.0/0\n...\n\n# When a --link= has been created under --icc=false,\n# you should see port-specific ACCEPT rules overriding\n# the subsequent DROP policy for all other packets:\n\n$ sudo iptables -L -n\n...\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\nDOCKER all -- 0.0.0.0/0 0.0.0.0/0\nDROP all -- 0.0.0.0/0 0.0.0.0/0\n\nChain DOCKER (1 references)\ntarget prot opt source destination\nACCEPT tcp -- 172.17.0.2 172.17.0.3 tcp spt:80\nACCEPT tcp -- 172.17.0.3 172.17.0.2 tcp dpt:80 Note :\nDocker is careful that its host-wide iptables rules fully expose\ncontainers to each other's raw IP addresses, so connections from one\ncontainer to another should always appear to be originating from the\nfirst container's own IP address.",
"title": "Communication between containers"
},
{
"loc": "/articles/networking#binding-container-ports-to-the-host",
"tags": "",
"text": "By default Docker containers can make connections to the outside world,\nbut the outside world cannot connect to containers. Each outgoing\nconnection will appear to originate from one of the host machine's own\nIP addresses thanks to an iptables masquerading rule on the host\nmachine that the Docker server creates when it starts: # You can see that the Docker server creates a\n# masquerade rule that let containers connect\n# to IP addresses in the outside world:\n\n$ sudo iptables -t nat -L -n\n...\nChain POSTROUTING (policy ACCEPT)\ntarget prot opt source destination\nMASQUERADE all -- 172.17.0.0/16 !172.17.0.0/16\n... But if you want containers to accept incoming connections, you will need\nto provide special options when invoking docker run . These options\nare covered in more detail in the Docker User Guide \npage. There are two approaches. First, you can supply -P or --publish-all=true|false to docker run \nwhich is a blanket operation that identifies every port with an EXPOSE \nline in the image's Dockerfile and maps it to a host port somewhere in\nthe range 49153\u201365535. This tends to be a bit inconvenient, since you\nthen have to run other docker sub-commands to learn which external\nport a given service was mapped to. More convenient is the -p SPEC or --publish=SPEC option which lets\nyou be explicit about exactly which external port on the Docker server \u2014\nwhich can be any port at all, not just those in the 49153-65535 block \u2014\nyou want mapped to which port in the container. Either way, you should be able to peek at what Docker has accomplished\nin your network stack by examining your NAT tables. 
# What your NAT rules might look like when Docker\n# is finished setting up a -P forward:\n\n$ iptables -t nat -L -n\n...\nChain DOCKER (2 references)\ntarget prot opt source destination\nDNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:49153 to:172.17.0.2:80\n\n# What your NAT rules might look like when Docker\n# is finished setting up a -p 80:80 forward:\n\nChain DOCKER (2 references)\ntarget prot opt source destination\nDNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 You can see that Docker has exposed these container ports on 0.0.0.0 ,\nthe wildcard IP address that will match any possible incoming port on\nthe host machine. If you want to be more restrictive and only allow\ncontainer services to be contacted through a specific external interface\non the host machine, you have two choices. When you invoke docker run \nyou can use either -p IP:host_port:container_port or -p IP::port to\nspecify the external interface for one particular binding. Or if you always want Docker port forwards to bind to one specific IP\naddress, you can edit your system-wide Docker server settings (on\nUbuntu, by editing DOCKER_OPTS in /etc/default/docker ) and add the\noption --ip=IP_ADDRESS . Remember to restart your Docker server after\nediting this setting. Again, this topic is covered without all of these low-level networking\ndetails in the Docker User Guide document if you\nwould like to use that as your port redirection reference instead.",
"title": "Binding container ports to the host"
},
{
"loc": "/articles/networking#ipv6",
"tags": "",
"text": "As we are running out of IPv4 addresses ,\nthe IETF has standardized an IPv4 successor, Internet Protocol Version 6 ,\nin RFC 2460 . Both protocols, IPv4 and\nIPv6, reside on layer 3 of the OSI model . IPv6 with Docker By default, the Docker server configures the container network for IPv4 only.\nYou can enable IPv4/IPv6 dualstack support by running the Docker daemon with the --ipv6 flag. Docker will set up the bridge docker0 with the IPv6 link-local address fe80::1 . By default, containers that are created will only get a link-local IPv6 address.\nTo assign globally routable IPv6 addresses to your containers you have to\nspecify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the --fixed-cidr-v6 parameter when starting the Docker daemon: docker -d --ipv6 --fixed-cidr-v6=\"2001:db8:1::/64\" The subnet for Docker containers should at least have a size of /80 . This way\nan IPv6 address can end with the container's MAC address and you prevent NDP\nneighbor cache invalidation issues in the Docker layer. With the --fixed-cidr-v6 parameter set, Docker will add a new route to the\nrouting table. IPv6 routing will also be enabled (you may prevent this by\nstarting the Docker daemon with --ip-forward=false ): $ ip -6 route add 2001:db8:1::/64 dev docker0\n$ sysctl net.ipv6.conf.default.forwarding=1\n$ sysctl net.ipv6.conf.all.forwarding=1 All traffic to the subnet 2001:db8:1::/64 will now be routed\nvia the docker0 interface. Be aware that IPv6 forwarding may interfere with your existing IPv6\nconfiguration: If you are using Router Advertisements to get IPv6 settings for\nyour host's interfaces you should set accept_ra to 2 . Otherwise, enabling IPv6\nforwarding will result in Router Advertisements being rejected. E.g., if you\nwant to configure eth0 via Router Advertisements you should set: $ sysctl net.ipv6.conf.eth0.accept_ra=2 Every new container will get an IPv6 address from the defined subnet. 
Furthermore,\na default route will be added via the gateway fe80::1 on eth0 : docker run -it ubuntu bash -c \"ip -6 addr show dev eth0; ip -6 route show\"\n\n15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500\n inet6 2001:db8:1:0:0:242:ac11:3/64 scope global\n valid_lft forever preferred_lft forever\n inet6 fe80::42:acff:fe11:3/64 scope link\n valid_lft forever preferred_lft forever\n\n2001:db8:1::/64 dev eth0 proto kernel metric 256\nfe80::/64 dev eth0 proto kernel metric 256\ndefault via fe80::1 dev eth0 metric 1024 In this example the Docker container is assigned a link-local address with the\nnetwork suffix /64 (here: fe80::42:acff:fe11:3/64 ) and a globally routable\nIPv6 address (here: 2001:db8:1:0:0:242:ac11:3/64 ). The container will create\nconnections to addresses outside of the 2001:db8:1::/64 network via the\nlink-local gateway at fe80::1 on eth0 . Often servers or virtual machines get a /64 IPv6 subnet assigned (e.g. 2001:db8:23:42::/64 ). In this case you can split it up further and provide\nDocker a /80 subnet while using a separate /80 subnet for other\napplications on the host: In this setup the subnet 2001:db8:23:42::/80 with a range from 2001:db8:23:42:0:0:0:0 \nto 2001:db8:23:42:0:ffff:ffff:ffff is attached to eth0 , with the host listening\nat 2001:db8:23:42::1 . The subnet 2001:db8:23:42:1::/80 with an address range from 2001:db8:23:42:1:0:0:0 to 2001:db8:23:42:1:ffff:ffff:ffff is attached to docker0 and will be used by containers. Docker IPv6 Cluster Switched Network Environment Using routable IPv6 addresses enables communication between\ncontainers on different hosts. Let's have a look at a simple Docker IPv6 cluster\nexample: The Docker hosts are in the 2001:db8:0::/64 subnet. Host1 is configured\nto provide addresses from the 2001:db8:1::/64 subnet to its containers. 
It\nhas three routes configured: Route all traffic to 2001:db8:0::/64 via eth0 Route all traffic to 2001:db8:1::/64 via docker0 Route all traffic to 2001:db8:2::/64 via Host2 with IP 2001:db8::2 Host1 also acts as a router on OSI layer 3. When one of the network clients\ntries to contact a target that is specified in Host1's routing table, Host1 will\nforward the traffic accordingly. It acts as a router for all networks it knows: 2001:db8::/64 , 2001:db8:1::/64 and 2001:db8:2::/64 . On Host2 we have nearly the same configuration. Host2's containers will get\nIPv6 addresses from 2001:db8:2::/64 . Host2 has three routes configured: Route all traffic to 2001:db8:0::/64 via eth0 Route all traffic to 2001:db8:2::/64 via docker0 Route all traffic to 2001:db8:1::/64 via Host1 with IP 2001:db8:0::1 The difference from Host1 is that the network 2001:db8:2::/64 is directly\nattached to the host via its docker0 interface whereas it reaches 2001:db8:1::/64 via Host1's IPv6 address 2001:db8::1 . This way every container is able to contact every other container. The\ncontainers Container1-* share the same subnet and contact each other directly.\nThe traffic between Container1-* and Container2-* will be routed via Host1\nand Host2 because those containers do not share the same subnet. In a switched environment every host has to know all routes to every subnet. You\nalways have to update the hosts' routing tables whenever you add or remove a host\nfrom the cluster. Every configuration in the diagram that is shown below the dashed line is\nhandled by Docker: The docker0 bridge IP address configuration, the route to\nthe Docker subnet on the host, the container IP addresses and the routes on the\ncontainers. The configuration above the line is up to the user and can be\nadapted to the individual environment. Routed Network Environment In a routed network environment you replace the layer 2 switch with a layer 3\nrouter. 
Now the hosts just have to know their default gateway (the router) and\nthe route to their own containers (managed by Docker). The router holds all\nrouting information about the Docker subnets. When you add or remove a host in\nthis environment you just have to update the routing table in the router - not\non every host. In this scenario containers on the same host can communicate directly with each\nother. The traffic between containers on different hosts will be routed via\ntheir hosts and the router. For example, a packet from Container1-1 to Container2-1 will be routed through Host1 , Router and Host2 until it\narrives at Container2-1 . To keep the IPv6 addresses short in this example a /48 network is assigned to\nevery host. Each host uses a /64 subnet of this for its own services and one\nfor Docker. When adding a third host you would add a route for the subnet 2001:db8:3::/48 in the router and configure Docker on Host3 with --fixed-cidr-v6=2001:db8:3:1::/64 . Remember the subnet for Docker containers should at least have a size of /80 .\nThis way an IPv6 address can end with the container's MAC address and you\nprevent NDP neighbor cache invalidation issues in the Docker layer. So if you\nhave a /64 for your whole environment use /68 subnets for the hosts and /80 for the containers. This way you can use 16 hosts with 4,096 /80 subnets\neach. Every configuration in the diagram that is visualized below the dashed line is\nhandled by Docker: The docker0 bridge IP address configuration, the route to\nthe Docker subnet on the host, the container IP addresses and the routes on the\ncontainers. The configuration above the line is up to the user and can be\nadapted to the individual environment.",
"title": "IPv6"
},
{
"loc": "/articles/networking#customizing-docker0",
"tags": "",
"text": "By default, the Docker server creates and configures the host system's docker0 interface as an Ethernet bridge inside the Linux kernel that\ncan pass packets back and forth between other physical or virtual\nnetwork interfaces so that they behave as a single Ethernet network. Docker configures docker0 with an IP address, netmask and IP\nallocation range, so the host machine can both receive and send packets to\ncontainers connected to the bridge, and gives the bridge an MTU \u2014 the maximum\ntransmission unit or largest packet length that the interface will\nallow \u2014 of either 1,500 bytes or else a more specific value copied from\nthe Docker host's interface that supports its default route. These\noptions are configurable at server startup: --bip=CIDR \u2014 supply a specific IP address and netmask for the\n docker0 bridge, using standard CIDR notation like\n 192.168.1.5/24 . --fixed-cidr=CIDR \u2014 restrict the IP range from the docker0 subnet,\n using the standard CIDR notation like 172.17.1.0/28 . This range must\n be an IPv4 range for fixed IPs (ex: 10.20.0.0/16) and must be a subset\n of the bridge IP range ( docker0 or set using --bridge ). For example\n with --fixed-cidr=192.168.1.0/25 , IPs for your containers will be chosen\n from the first half of the 192.168.1.0/24 subnet. --mtu=BYTES \u2014 override the maximum packet length on docker0 . On Ubuntu you would add these to the DOCKER_OPTS setting in /etc/default/docker on your Docker host and restart the Docker\nservice. Once you have one or more containers up and running, you can confirm\nthat Docker has properly connected them to the docker0 bridge by\nrunning the brctl command on the host machine and looking at the interfaces column of the output. 
Here is a host with two different\ncontainers connected: # Display bridge info\n\n$ sudo brctl show\nbridge name bridge id STP enabled interfaces\ndocker0 8000.3a1d7362b4ee no veth65f9\n vethdda6 If the brctl command is not installed on your Docker host, then on\nUbuntu you should be able to run sudo apt-get install bridge-utils to\ninstall it. Finally, the docker0 Ethernet bridge settings are used every time you\ncreate a new container. Docker selects a free IP address from the range\navailable on the bridge each time you docker run a new container, and\nconfigures the container's eth0 interface with that IP address and the\nbridge's netmask. The Docker host's own IP address on the bridge is\nused as the default gateway by which each container reaches the rest of\nthe Internet. # The network, as seen from a container\n\n$ sudo docker run -i -t --rm base /bin/bash\n\n$$ ip addr show eth0\n24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000\n link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff\n inet 172.17.0.3/16 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::306f:e0ff:fe35:5791/64 scope link\n valid_lft forever preferred_lft forever\n\n$$ ip route\ndefault via 172.17.42.1 dev eth0\n172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3\n\n$$ exit Remember that the Docker host will not be willing to forward container\npackets out on to the Internet unless its ip_forward system setting is 1 \u2014 see the section above on Communication between\ncontainers and the wider world for details.",
|
|
"title": "Customizing docker0"
|
|
},
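The daemon options listed above can be combined in one place; a minimal sketch of `/etc/default/docker` on Ubuntu, with purely illustrative address and MTU values (not recommendations for your network):

```shell
# /etc/default/docker -- illustrative values only
# --bip sets the docker0 bridge's own IP address and netmask,
# --fixed-cidr restricts container IPs to a subset of that range,
# --mtu overrides the maximum packet length on docker0.
DOCKER_OPTS="--bip=192.168.1.5/24 --fixed-cidr=192.168.1.0/25 --mtu=1400"
```

After editing the file, restart the daemon (`sudo service docker restart` on Ubuntu) so the new settings take effect; containers started afterwards will draw addresses from the restricted range.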
|
|
{
|
|
"loc": "/articles/networking#building-your-own-bridge",
|
|
"tags": "",
|
|
"text": "If you want to take Docker out of the business of creating its own\nEthernet bridge entirely, you can set up your own bridge before starting\nDocker and use -b BRIDGE or --bridge=BRIDGE to tell Docker to use\nyour bridge instead. If you already have Docker up and running with its\nold docker0 still configured, you will probably want to begin by\nstopping the service and removing the interface: # Stopping Docker and removing docker0\n\n$ sudo service docker stop\n$ sudo ip link set dev docker0 down\n$ sudo brctl delbr docker0\n$ sudo iptables -t nat -F POSTROUTING Then, before starting the Docker service, create your own bridge and\ngive it whatever configuration you want. Here we will create a simple\nenough bridge that we really could just have used the options in the\nprevious section to customize docker0 , but it will be enough to\nillustrate the technique. # Create our own bridge\n\n$ sudo brctl addbr bridge0\n$ sudo ip addr add 192.168.5.1/24 dev bridge0\n$ sudo ip link set dev bridge0 up\n\n# Confirming that our bridge is up and running\n\n$ ip addr show bridge0\n4: bridge0: BROADCAST,MULTICAST mtu 1500 qdisc noop state UP group default\n link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff\n inet 192.168.5.1/24 scope global bridge0\n valid_lft forever preferred_lft forever\n\n# Tell Docker about it and restart (on Ubuntu)\n\n$ echo 'DOCKER_OPTS=\"-b=bridge0\"' >> /etc/default/docker\n$ sudo service docker start\n\n# Confirming new outgoing NAT masquerade is set up\n\n$ sudo iptables -t nat -L -n\n...\nChain POSTROUTING (policy ACCEPT)\ntarget prot opt source destination\nMASQUERADE all -- 192.168.5.0/24 0.0.0.0/0 The result should be that the Docker server starts successfully and is\nnow prepared to bind containers to the new bridge. After pausing to\nverify the bridge's configuration, try creating a container \u2014 you will\nsee that its IP address is in your new IP address range, which Docker\nwill have auto-detected. 
Just as we learned in the previous section, you can use the brctl show \ncommand to see Docker add and remove interfaces from the bridge as you\nstart and stop containers, and can run ip addr and ip route inside a\ncontainer to see that it has been given an address in the bridge's IP\naddress range and has been told to use the Docker host's IP address on\nthe bridge as its default gateway to the rest of the Internet.",
|
|
"title": "Building your own bridge"
|
|
},
|
|
{
|
|
"loc": "/articles/networking#how-docker-networks-a-container",
|
|
"tags": "",
|
|
"text": "While Docker is under active development and continues to tweak and\nimprove its network configuration logic, the shell commands in this\nsection are rough equivalents to the steps that Docker takes when\nconfiguring networking for each new container. Let's review a few basics. To communicate using the Internet Protocol\u00a0(IP), a machine needs access\nto at least one network interface at which packets can be sent and\nreceived, and a routing table that defines the range of IP addresses\nreachable through that interface. Network interfaces do not have to be\nphysical devices. In fact, the lo loopback interface available on\nevery Linux machine (and inside each Docker container) is entirely\nvirtual \u2014 the Linux kernel simply copies loopback packets directly from\nthe sender's memory into the receiver's memory. Docker uses special virtual interfaces to let containers communicate\nwith the host machine \u2014 pairs of virtual interfaces called \u201cpeers\u201d that\nare linked inside of the host machine's kernel so that packets can\ntravel between them. They are simple to create, as we will see in a\nmoment. The steps with which Docker configures a container are: Create a pair of peer virtual interfaces. Give one of them a unique name like veth65f9 , keep it inside of\n the main Docker host, and bind it to docker0 or whatever bridge\n Docker is supposed to be using. Toss the other interface over the wall into the new container (which\n will already have been provided with an lo interface) and rename\n it to the much prettier name eth0 since, inside of the container's\n separate and unique network interface namespace, there are no\n physical interfaces with which this name could collide. Set the interface's MAC address according to the --mac-address \n parameter or generate a random one. 
Give the container's eth0 a new IP address from within the\n bridge's range of network addresses, and set its default route to\n the IP address that the Docker host owns on the bridge. If available\n the IP address is generated from the MAC address. This prevents ARP\n cache invalidation problems, when a new container comes up with an\n IP used in the past by another container with another MAC. With these steps complete, the container now possesses an eth0 \n(virtual) network card and will find itself able to communicate with\nother containers and the rest of the Internet. You can opt out of the above process for a particular container by\ngiving the --net= option to docker run , which takes four possible\nvalues. --net=bridge \u2014 The default action, that connects the container to\n the Docker bridge as described above. --net=host \u2014 Tells Docker to skip placing the container inside of\n a separate network stack. In essence, this choice tells Docker to\n not containerize the container's networking ! While container\n processes will still be confined to their own filesystem and process\n list and resource limits, a quick ip addr command will show you\n that, network-wise, they live \u201coutside\u201d in the main Docker host and\n have full access to its network interfaces. Note that this does\n not let the container reconfigure the host network stack \u2014 that\n would require --privileged=true \u2014 but it does let container\n processes open low-numbered ports like any other root process.\n It also allows the container to access local network services\n like D-bus. This can lead to processes in the container being\n able to do unexpected things like\n restart your computer .\n You should use this option with caution. --net=container:NAME_or_ID \u2014 Tells Docker to put this container's\n processes inside of the network stack that has already been created\n inside of another container. 
The new container's processes will be\n confined to their own filesystem and process list and resource\n limits, but will share the same IP address and port numbers as the\n first container, and processes on the two containers will be able to\n connect to each other over the loopback interface. --net=none \u2014 Tells Docker to put the container inside of its own\n network stack but not to take any steps to configure its network,\n leaving you free to build any of the custom configurations explored\n in the last few sections of this document. To get an idea of the steps that are necessary if you use --net=none \nas described in that last bullet point, here are the commands that you\nwould run to reach roughly the same configuration as if you had let\nDocker do all of the configuration: # At one shell, start a container and\n# leave its shell idle and running\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@63f36fc01b5f:/#\n\n# At another shell, learn the container process ID\n# and create its namespace entry in /var/run/netns/\n# for the \"ip netns\" command we will be using below\n\n$ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f\n2778\n$ pid=2778\n$ sudo mkdir -p /var/run/netns\n$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid\n\n# Check the bridge's IP address and netmask\n\n$ ip addr show docker0\n21: docker0: ...\ninet 172.17.42.1/16 scope global docker0\n...\n\n# Create a pair of \"peer\" interfaces A and B,\n# bind the A end to the bridge, and bring it up\n\n$ sudo ip link add A type veth peer name B\n$ sudo brctl addif docker0 A\n$ sudo ip link set A up\n\n# Place B inside the container's network namespace,\n# rename to eth0, and activate it with a free IP\n\n$ sudo ip link set B netns $pid\n$ sudo ip netns exec $pid ip link set dev B name eth0\n$ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc\n$ sudo ip netns exec $pid ip link set eth0 up\n$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0\n$ sudo ip 
netns exec $pid ip route add default via 172.17.42.1 At this point your container should be able to perform networking\noperations as usual. When you finally exit the shell and Docker cleans up the container, the\nnetwork namespace is destroyed along with our virtual eth0 \u2014 whose\ndestruction in turn destroys interface A out in the Docker host and\nautomatically un-registers it from the docker0 bridge. So everything\ngets cleaned up without our having to run any extra commands! Well,\nalmost everything: # Clean up dangling symlinks in /var/run/netns\n\nfind -L /var/run/netns -type l -delete Also note that while the script above used the modern ip command instead\nof the old deprecated wrappers like ifconfig and route , these older\ncommands would also have worked inside of our container. The ip addr \ncommand can be typed as ip a if you are in a hurry. Finally, note the importance of the ip netns exec command, which let\nus reach inside and configure a network namespace as root. The same\ncommands would not have worked if run inside of the container, because\npart of safe containerization is that Docker strips container processes\nof the right to configure their own networks. Using ip netns exec is\nwhat let us finish up the configuration without having to take the\ndangerous step of running the container itself with --privileged=true .",
|
|
"title": "How Docker networks a container"
|
|
},
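The four `--net` modes described above can be tried directly from the command line; a quick sketch, reusing the `base` image from the examples above (the container name `web` in the third command is a hypothetical, already-running container):

```shell
# Default: attach the container to the docker0 bridge
$ sudo docker run -i -t --rm --net=bridge base ip addr show eth0

# Share the host's network stack (no separate network namespace)
$ sudo docker run -i -t --rm --net=host base ip addr

# Join the network stack already created for the container named "web"
$ sudo docker run -i -t --rm --net=container:web base ip addr show eth0

# No networking at all beyond the loopback interface
$ sudo docker run -i -t --rm --net=none base ip addr
```

Comparing the `ip addr` output across the four runs makes the differences concrete: the bridge mode shows a 172.17.x.x address, host mode shows the host's own interfaces, and `--net=none` shows only `lo`.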
|
|
{
|
|
"loc": "/articles/networking#tools-and-examples",
|
|
"tags": "",
|
|
"text": "Before diving into the following sections on custom network topologies,\nyou might be interested in glancing at a few external tools or examples\nof the same kinds of configuration. Here are two: J\u00e9r\u00f4me Petazzoni has created a pipework shell script to help you\n connect together containers in arbitrarily complex scenarios:\n https://github.com/jpetazzo/pipework Brandon Rhodes has created a whole network topology of Docker\n containers for the next edition of Foundations of Python Network\n Programming that includes routing, NAT'd firewalls, and servers that\n offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:\n https://github.com/brandon-rhodes/fopnp/tree/m/playground Both tools use networking commands very much like the ones you saw in\nthe previous section, and will see in the following sections.",
|
|
"title": "Tools and Examples"
|
|
},
|
|
{
|
|
"loc": "/articles/networking#building-a-point-to-point-connection",
|
|
"tags": "",
|
|
"text": "By default, Docker attaches all containers to the virtual subnet\nimplemented by docker0 . You can create containers that are each\nconnected to some different virtual subnet by creating your own bridge\nas shown in Building your own bridge , starting each\ncontainer with docker run --net=none , and then attaching the\ncontainers to your bridge with the shell commands shown in How Docker\nnetworks a container . But sometimes you want two particular containers to be able to\ncommunicate directly without the added complexity of both being bound to\na host-wide Ethernet bridge. The solution is simple: when you create your pair of peer interfaces,\nsimply throw both of them into containers, and configure them as\nclassic point-to-point links. The two containers will then be able to\ncommunicate directly (provided you manage to tell each container the\nother's IP address, of course). You might adjust the instructions of\nthe previous section to go something like this: # Start up two containers in two terminal windows\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@1f1f4c1f931a:/#\n\n$ sudo docker run -i -t --rm --net=none base /bin/bash\nroot@12e343489d2f:/#\n\n# Learn the container process IDs\n# and create their namespace entries\n\n$ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a\n2989\n$ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f\n3004\n$ sudo mkdir -p /var/run/netns\n$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989\n$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004\n\n# Create the \"peer\" interfaces and hand them out\n\n$ sudo ip link add A type veth peer name B\n\n$ sudo ip link set A netns 2989\n$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A\n$ sudo ip netns exec 2989 ip link set A up\n$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A\n\n$ sudo ip link set B netns 3004\n$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B\n$ sudo ip netns exec 3004 ip link set B up\n$ sudo ip netns exec 3004 ip route 
add 10.1.1.1/32 dev B The two containers should now be able to ping each other and make\nconnections successfully. Point-to-point links like this do not depend\non a subnet nor a netmask, but on the bare assertion made by ip route \nthat some other single IP address is connected to a particular network\ninterface. Note that point-to-point links can be safely combined with other kinds\nof network connectivity \u2014 there is no need to start the containers with --net=none if you want point-to-point links to be an addition to the\ncontainer's normal networking instead of a replacement. A final permutation of this pattern is to create the point-to-point link\nbetween the Docker host and one container, which would allow the host to\ncommunicate with that one container on some single IP address and thus\ncommunicate \u201cout-of-band\u201d of the bridge that connects the other, more\nusual containers. But unless you have very specific networking needs\nthat drive you to such a solution, it is probably far preferable to use --icc=false to lock down inter-container communication, as we explored\nearlier.",
|
|
"title": "Building a point-to-point connection"
|
|
},
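To verify a point-to-point link built as above, you can ping across it from each network namespace; the PIDs and addresses below are the ones from the example and will differ on your host:

```shell
# From the first container's namespace (10.1.1.1), reach its peer
$ sudo ip netns exec 2989 ping -c 1 10.1.1.2

# And back the other way from the second (10.1.1.2)
$ sudo ip netns exec 3004 ping -c 1 10.1.1.1
```

If either ping fails, re-check that both interfaces are up and that each namespace has the single-host route (`ip route add .../32`) pointing at its own end of the veth pair.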
|
|
{
|
|
"loc": "/articles/networking#editing-networking-config-files",
|
|
"tags": "",
|
|
"text": "Starting with Docker v1.2.0, you can now edit /etc/hosts , /etc/hostname \nand /etc/resolv.conf in a running container. This is useful if you need\nto install bind or other services that might override one of those files. Note, however, that changes to these files will not be saved by docker commit , nor will they be saved during docker run .\nThat means they won't be saved in the image, nor will they persist when a\ncontainer is restarted; they will only \"stick\" in a running container.",
|
|
"title": "Editing networking config files"
|
|
},
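A quick way to see this behavior for yourself, sketched with the `base` image used elsewhere in this document (the host entry added is purely illustrative):

```shell
# Inside a running container, /etc/hosts is writable...
$ sudo docker run -i -t --rm base /bin/bash
$$ echo '10.0.0.5 backend' >> /etc/hosts
$$ grep backend /etc/hosts    # the entry is visible in this container
$$ exit

# ...but a fresh container from the same image starts clean:
# the added line is gone, because the edit was never part of the image.
$ sudo docker run --rm base cat /etc/hosts
```

This is why services that rewrite these files (such as bind) work inside a container, yet their changes never leak into images built from it.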
|
|
{
|
|
"loc": "/articles/security/",
|
|
"tags": "",
|
|
"text": "Docker Security\nThere are four major areas to consider when reviewing Docker security:\n\nthe intrinsic security of the kernel and its support for\n namespaces and cgroups;\nthe attack surface of the Docker daemon itself;\nloopholes in the container configuration profile, either by default,\n or when customized by users.\nthe \"hardening\" security features of the kernel and how they\n interact with containers.\n\nKernel Namespaces\nDocker containers are very similar to LXC containers, and they have\nsimilar security features. When you start a container with docker\nrun, behind the scenes Docker creates a set of namespaces and control\ngroups for the container.\nNamespaces provide the first and most straightforward form of\nisolation: processes running within a container cannot see, much less\naffect, processes running in another container, or in the host\nsystem.\nEach container also gets its own network stack, meaning that a\ncontainer doesn't get privileged access to the sockets or interfaces\nof another container. Of course, if the host system is set up\naccordingly, containers can interact with each other through their\nrespective network interfaces \u2014 just like they can interact with\nexternal hosts. When you specify public ports for your containers or use\nlinks\nthen IP traffic is allowed between containers. They can ping each other,\nsend/receive UDP packets, and establish TCP connections, but that can be\nrestricted if necessary. From a network architecture point of view, all\ncontainers on a given Docker host are sitting on bridge interfaces. This\nmeans that they are just like physical machines connected through a\ncommon Ethernet switch; no more, no less.\nHow mature is the code providing kernel namespaces and private\nnetworking? 
Kernel namespaces were introduced between kernel version\n2.6.15 and\n2.6.26.\nThis means that since July 2008 (date of the 2.6.26 release, now 5 years\nago), namespace code has been exercised and scrutinized on a large\nnumber of production systems. And there is more: the design and\ninspiration for the namespaces code are even older. Namespaces are\nactually an effort to reimplement the features of OpenVZ in such a way that they could be\nmerged within the mainstream kernel. And OpenVZ was initially released\nin 2005, so both the design and the implementation are pretty mature.\nControl Groups\nControl Groups are another key component of Linux Containers. They\nimplement resource accounting and limiting. They provide many\nuseful metrics, but they also help ensure that each container gets\nits fair share of memory, CPU, disk I/O; and, more importantly, that a\nsingle container cannot bring the system down by exhausting one of those\nresources.\nSo while they do not play a role in preventing one container from\naccessing or affecting the data and processes of another container, they\nare essential to fend off some denial-of-service attacks. They are\nparticularly important on multi-tenant platforms, like public and\nprivate PaaS, to guarantee a consistent uptime (and performance) even\nwhen some applications start to misbehave.\nControl Groups have been around for a while as well: the code was\nstarted in 2006, and initially merged in kernel 2.6.24.\nDocker Daemon Attack Surface\nRunning containers (and applications) with Docker implies running the\nDocker daemon. This daemon currently requires root privileges, and you\nshould therefore be aware of some important details.\nFirst of all, only trusted users should be allowed to control your\nDocker daemon. This is a direct consequence of some powerful Docker\nfeatures. 
Specifically, Docker allows you to share a directory between\nthe Docker host and a guest container; and it allows you to do so\nwithout limiting the access rights of the container. This means that you\ncan start a container where the /host directory will be the / directory\non your host; and the container will be able to alter your host filesystem\nwithout any restriction. This is similar to how virtualization systems\nallow filesystem resource sharing. Nothing prevents you from sharing your\nroot filesystem (or even your root block device) with a virtual machine.\nThis has a strong security implication: for example, if you instrument Docker\nfrom a web server to provision containers through an API, you should be\neven more careful than usual with parameter checking, to make sure that\na malicious user cannot pass crafted parameters causing Docker to create\narbitrary containers.\nFor this reason, the REST API endpoint (used by the Docker CLI to\ncommunicate with the Docker daemon) changed in Docker 0.5.2, and now\nuses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the\nlatter being prone to cross-site-scripting attacks if you happen to run\nDocker directly on your local machine, outside of a VM). You can then\nuse traditional UNIX permission checks to limit access to the control\nsocket.\nYou can also expose the REST API over HTTP if you explicitly decide so.\nHowever, if you do that, being aware of the above mentioned security\nimplication, you should ensure that it will be reachable only from a\ntrusted network or VPN; or protected with e.g., stunnel and client SSL\ncertificates. You can also secure them with HTTPS and\ncertificates.\nThe daemon is also potentially vulnerable to other inputs, such as image\nloading from either disk with 'docker load', or from the network with\n'docker pull'. This has been a focus of improvement in the community,\nespecially for 'pull' security. 
While these overlap, it should be noted\nthat 'docker load' is a mechanism for backup and restore and is not\ncurrently considered a secure mechanism for loading images. As of\nDocker 1.3.2, images are now extracted in a chrooted subprocess on\nLinux/Unix platforms, the first step in a wider effort toward\nprivilege separation.\nEventually, it is expected that the Docker daemon will run with restricted\nprivileges, delegating operations to well-audited sub-processes,\neach with its own (very limited) scope of Linux capabilities:\nvirtual network setup, filesystem management, etc. That is, most likely,\npieces of the Docker engine itself will run inside of containers.\nFinally, if you run Docker on a server, it is recommended to run\nexclusively Docker on the server, and move all other services within\ncontainers controlled by Docker. Of course, it is fine to keep your\nfavorite admin tools (probably at least an SSH server), as well as\nexisting monitoring/supervision processes (e.g., NRPE, collectd, etc).\nLinux Kernel Capabilities\nBy default, Docker starts containers with a restricted set of\ncapabilities. What does that mean?\nCapabilities turn the binary \"root/non-root\" dichotomy into a\nfine-grained access control system. Processes (like web servers) that\njust need to bind on a port below 1024 do not have to run as root: they\ncan just be granted the net_bind_service capability instead. And there\nare many other capabilities, for almost all the specific areas where root\nprivileges are usually needed.\nThis means a lot for container security; let's see why!\nYour average server (bare metal or virtual machine) needs to run a bunch\nof processes as root. Those typically include SSH, cron, syslogd;\nhardware management tools (e.g., load modules), network configuration\ntools (e.g., to handle DHCP, WPA, or VPNs), and much more. 
A container is\nvery different, because almost all of those tasks are handled by the\ninfrastructure around the container:\n\nSSH access will typically be managed by a single server running on\n the Docker host;\ncron, when necessary, should run as a user\n process, dedicated and tailored for the app that needs its\n scheduling service, rather than as a platform-wide facility;\nlog management will also typically be handled by Docker, or by\n third-party services like Loggly or Splunk;\nhardware management is irrelevant, meaning that you never need to\n run udevd or equivalent daemons within\n containers;\nnetwork management happens outside of the containers, enforcing\n separation of concerns as much as possible, meaning that a container\n should never need to perform ifconfig,\n route, or ip commands (except when a container\n is specifically engineered to behave like a router or firewall, of\n course).\n\nThis means that in most cases, containers will not need \"real\" root\nprivileges at all. And therefore, containers can run with a reduced\ncapability set; meaning that \"root\" within a container has far fewer\nprivileges than the real \"root\". For instance, it is possible to:\n\ndeny all \"mount\" operations;\ndeny access to raw sockets (to prevent packet spoofing);\ndeny access to some filesystem operations, like creating new device\n nodes, changing the owner of files, or altering attributes (including\n the immutable flag);\ndeny module loading;\nand many others.\n\nThis means that even if an intruder manages to escalate to root within a\ncontainer, it will be much harder to do serious damage, or to escalate\nto the host.\nThis won't affect regular web apps; but malicious users will find that\nthe arsenal at their disposal has shrunk considerably! By default Docker\ndrops all capabilities except those\nneeded,\na whitelist instead of a blacklist approach. 
You can see a full list of\navailable capabilities in Linux\nmanpages.\nOne primary risk with running Docker containers is that the default set\nof capabilities and mounts given to a container may provide incomplete\nisolation, either independently, or when used in combination with\nkernel vulnerabilities.\nDocker supports the addition and removal of capabilities, allowing use\nof a non-default profile. This may make Docker more secure through\ncapability removal, or less secure through the addition of capabilities.\nThe best practice for users would be to remove all capabilities except\nthose explicitly required for their processes.\nOther Kernel Security Features\nCapabilities are just one of the many security features provided by\nmodern Linux kernels. It is also possible to leverage existing,\nwell-known systems like TOMOYO, AppArmor, SELinux, GRSEC, etc. with\nDocker.\nWhile Docker currently only enables capabilities, it doesn't interfere\nwith the other systems. This means that there are many different ways to\nharden a Docker host. Here are a few examples.\n\nYou can run a kernel with GRSEC and PAX. This will add many safety\n checks, both at compile-time and run-time; it will also defeat many\n exploits, thanks to techniques like address randomization. It doesn't\n require Docker-specific configuration, since those security features\n apply system-wide, independent of containers.\nIf your distribution comes with security model templates for\n Docker containers, you can use them out of the box. For instance, we\n ship a template that works with AppArmor and Red Hat comes with SELinux\n policies for Docker. 
These templates provide an extra safety net (even\n though it overlaps greatly with capabilities).\nYou can define your own policies using your favorite access control\n mechanism.\n\nJust like there are many third-party tools to augment Docker containers\nwith e.g., special network topologies or shared filesystems, you can\nexpect to see tools to harden existing Docker containers without\naffecting Docker's core.\nRecent improvements in Linux namespaces will soon make it possible to run\nfull-featured containers without root privileges, thanks to the new user\nnamespace. This is covered in detail here.\nMoreover, this will solve the problem caused by sharing filesystems\nbetween host and guest, since the user namespace allows users within\ncontainers (including the root user) to be mapped to other users in the\nhost system.\nToday, Docker does not directly support user namespaces, but they\nmay still be utilized by Docker containers on supported kernels,\nby directly using the clone syscall, or utilizing the 'unshare'\nutility. Using this, some users may find it possible to drop\nmore capabilities from their process as user namespaces provide\nan artificial capabilities set. Likewise, however, this artificial\ncapabilities set may require use of 'capsh' to restrict the\nuser-namespace capabilities set when using 'unshare'.\nEventually, it is expected that Docker will have direct, native support\nfor user namespaces, simplifying the process of hardening containers.\nConclusions\nDocker containers are, by default, quite secure; especially if you take\ncare of running your processes inside the containers as non-privileged\nusers (i.e., non-root).\nYou can add an extra layer of safety by enabling AppArmor, SELinux,\nGRSEC, or your favorite hardening solution.\nLast but not least, if you see interesting security features in other\ncontainerization systems, these are simply kernel features that may\nbe implemented in Docker as well. 
We welcome users to submit issues,\npull requests, and communicate via the mailing list.\nReferences:\n Docker Containers: How Secure Are They? (2013).\n On the Security of Containers (2014).",
|
|
"title": "Security"
|
|
},
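The capability add/drop mechanism discussed above is exposed through `docker run` flags; a whitelist-style sketch (the image name `my-webserver` is a hypothetical image whose process only needs to bind a low port):

```shell
# Drop every capability, then grant back only what the process needs:
# a web server binding port 80 requires just net_bind_service.
$ sudo docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-webserver

# Or selectively remove a risky capability from the default set,
# e.g. raw sockets (used for packet spoofing).
$ sudo docker run --cap-drop=NET_RAW -i -t --rm base /bin/bash
```

The first form follows the best practice stated above: start from nothing and add back only the capabilities your process explicitly requires.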
|
|
{
|
|
"loc": "/articles/security#docker-security",
|
|
"tags": "",
|
|
"text": "There are four major areas to consider when reviewing Docker security: the intrinsic security of the kernel and its support for\n namespaces and cgroups; the attack surface of the Docker daemon itself; loopholes in the container configuration profile, either by default,\n or when customized by users. the \"hardening\" security features of the kernel and how they\n interact with containers.",
|
|
"title": "Docker Security"
|
|
},
|
|
{
|
|
"loc": "/articles/security#kernel-namespaces",
|
|
"tags": "",
|
|
"text": "Docker containers are very similar to LXC containers, and they have\nsimilar security features. When you start a container with docker\nrun , behind the scenes Docker creates a set of namespaces and control\ngroups for the container. Namespaces provide the first and most straightforward form of\nisolation : processes running within a container cannot see, much less\naffect, processes running in another container, or in the host\nsystem. Each container also gets its own network stack , meaning that a\ncontainer doesn't get privileged access to the sockets or interfaces\nof another container. Of course, if the host system is set up\naccordingly, containers can interact with each other through their\nrespective network interfaces \u2014 just like they can interact with\nexternal hosts. When you specify public ports for your containers or use links \nthen IP traffic is allowed between containers. They can ping each other,\nsend/receive UDP packets, and establish TCP connections, but that can be\nrestricted if necessary. From a network architecture point of view, all\ncontainers on a given Docker host are sitting on bridge interfaces. This\nmeans that they are just like physical machines connected through a\ncommon Ethernet switch; no more, no less. How mature is the code providing kernel namespaces and private\nnetworking? Kernel namespaces were introduced between kernel version\n2.6.15 and\n2.6.26 .\nThis means that since July 2008 (date of the 2.6.26 release, now 5 years\nago), namespace code has been exercised and scrutinized on a large\nnumber of production systems. And there is more: the design and\ninspiration for the namespaces code are even older. Namespaces are\nactually an effort to reimplement the features of OpenVZ in such a way that they could be\nmerged within the mainstream kernel. And OpenVZ was initially released\nin 2005, so both the design and the implementation are pretty mature.",
|
|
"title": "Kernel Namespaces"
|
|
},
|
|
{
|
|
"loc": "/articles/security#control-groups",
|
|
"tags": "",
|
|
"text": "Control Groups are another key component of Linux Containers. They\nimplement resource accounting and limiting. They provide many\nuseful metrics, but they also help ensure that each container gets\nits fair share of memory, CPU, disk I/O; and, more importantly, that a\nsingle container cannot bring the system down by exhausting one of those\nresources. So while they do not play a role in preventing one container from\naccessing or affecting the data and processes of another container, they\nare essential to fend off some denial-of-service attacks. They are\nparticularly important on multi-tenant platforms, like public and\nprivate PaaS, to guarantee a consistent uptime (and performance) even\nwhen some applications start to misbehave. Control Groups have been around for a while as well: the code was\nstarted in 2006, and initially merged in kernel 2.6.24.",
|
|
"title": "Control Groups"
|
|
},
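The resource accounting and limiting that control groups provide is exposed as `docker run` options; a sketch with illustrative values (the cgroup path in the second command varies by distribution and container ID):

```shell
# Cap memory at 512 MB and give this container half the default
# CPU-shares weight (1024), so it cannot starve its neighbors.
$ sudo docker run -m 512m -c 512 -i -t --rm base /bin/bash

# From the host, inspect the limit the kernel is actually enforcing
# (substitute the full container ID for <container-id>)
$ cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
```

CPU shares are relative, not absolute: a container with `-c 512` only yields CPU to others when the host is contended, which is exactly the fair-share behavior described above.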
|
|
{
|
|
"loc": "/articles/security#docker-daemon-attack-surface",
|
|
"tags": "",
|
|
"text": "Running containers (and applications) with Docker implies running the\nDocker daemon. This daemon currently requires root privileges, and you\nshould therefore be aware of some important details. First of all, only trusted users should be allowed to control your\nDocker daemon . This is a direct consequence of some powerful Docker\nfeatures. Specifically, Docker allows you to share a directory between\nthe Docker host and a guest container; and it allows you to do so\nwithout limiting the access rights of the container. This means that you\ncan start a container where the /host directory will be the / directory\non your host; and the container will be able to alter your host filesystem\nwithout any restriction. This is similar to how virtualization systems\nallow filesystem resource sharing. Nothing prevents you from sharing your\nroot filesystem (or even your root block device) with a virtual machine. This has a strong security implication: for example, if you instrument Docker\nfrom a web server to provision containers through an API, you should be\neven more careful than usual with parameter checking, to make sure that\na malicious user cannot pass crafted parameters causing Docker to create\narbitrary containers. For this reason, the REST API endpoint (used by the Docker CLI to\ncommunicate with the Docker daemon) changed in Docker 0.5.2, and now\nuses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the\nlatter being prone to cross-site-scripting attacks if you happen to run\nDocker directly on your local machine, outside of a VM). You can then\nuse traditional UNIX permission checks to limit access to the control\nsocket. You can also expose the REST API over HTTP if you explicitly decide so.\nHowever, if you do that, being aware of the above mentioned security\nimplication, you should ensure that it will be reachable only from a\ntrusted network or VPN; or protected with e.g., stunnel and client SSL\ncertificates. 
You can also secure them with HTTPS and\ncertificates . The daemon is also potentially vulnerable to other inputs, such as image\nloading from either disk with 'docker load', or from the network with\n'docker pull'. This has been a focus of improvement in the community,\nespecially for 'pull' security. While these overlap, it should be noted\nthat 'docker load' is a mechanism for backup and restore and is not\ncurrently considered a secure mechanism for loading images. As of\nDocker 1.3.2, images are now extracted in a chrooted subprocess on\nLinux/Unix platforms, the first step in a wider effort toward\nprivilege separation. Eventually, it is expected that the Docker daemon will run with restricted\nprivileges, delegating operations to well-audited sub-processes,\neach with its own (very limited) scope of Linux capabilities,\nvirtual network setup, filesystem management, etc. That is, most likely,\npieces of the Docker engine itself will run inside of containers. Finally, if you run Docker on a server, it is recommended to run\nDocker exclusively on the server, and move all other services within\ncontainers controlled by Docker. Of course, it is fine to keep your\nfavorite admin tools (probably at least an SSH server), as well as\nexisting monitoring/supervision processes (e.g., NRPE, collectd, etc.).",
"title": "Docker Daemon Attack Surface"
},
{
"loc": "/articles/security#linux-kernel-capabilities",
"tags": "",
"text": "By default, Docker starts containers with a restricted set of\ncapabilities. What does that mean? Capabilities turn the binary \"root/non-root\" dichotomy into a\nfine-grained access control system. Processes (like web servers) that\njust need to bind on a port below 1024 do not have to run as root: they\ncan just be granted the net_bind_service capability instead. And there\nare many other capabilities, for almost all the specific areas where root\nprivileges are usually needed. This means a lot for container security; let's see why! Your average server (bare metal or virtual machine) needs to run a bunch\nof processes as root. Those typically include SSH, cron, syslogd;\nhardware management tools (e.g., to load modules), network configuration\ntools (e.g., to handle DHCP, WPA, or VPNs), and much more. A container is\nvery different, because almost all of those tasks are handled by the\ninfrastructure around the container: SSH access will typically be managed by a single server running on\n the Docker host; cron , when necessary, should run as a user\n process, dedicated and tailored for the app that needs its\n scheduling service, rather than as a platform-wide facility; log management will also typically be handled by Docker, or by\n third-party services like Loggly or Splunk; hardware management is irrelevant, meaning that you never need to\n run udevd or equivalent daemons within\n containers; network management happens outside of the containers, enforcing\n separation of concerns as much as possible, meaning that a container\n should never need to perform ifconfig ,\n route , or ip commands (except when a container\n is specifically engineered to behave like a router or firewall, of\n course). This means that in most cases, containers will not need \"real\" root\nprivileges at all . And therefore, containers can run with a reduced\ncapability set; meaning that \"root\" within a container has far fewer\nprivileges than the real \"root\".
For instance, it is possible to: deny all \"mount\" operations; deny access to raw sockets (to prevent packet spoofing); deny access to some filesystem operations, like creating new device\n nodes, changing the owner of files, or altering attributes (including\n the immutable flag); deny module loading; and many others. This means that even if an intruder manages to escalate to root within a\ncontainer, it will be much harder to do serious damage, or to escalate\nto the host. This won't affect regular web apps; but malicious users will find that\nthe arsenal at their disposal has shrunk considerably! By default Docker\ndrops all capabilities except those\nneeded, a whitelist instead of a blacklist approach. You can see a full list of\navailable capabilities in Linux\nmanpages . One primary risk with running Docker containers is that the default set\nof capabilities and mounts given to a container may provide incomplete\nisolation, either independently, or when used in combination with\nkernel vulnerabilities. Docker supports the addition and removal of capabilities, allowing use\nof a non-default profile. This may make Docker more secure through\ncapability removal, or less secure through the addition of capabilities.\nThe best practice for users would be to remove all capabilities except\nthose explicitly required for their processes.",
"title": "Linux Kernel Capabilities"
},
{
"loc": "/articles/security#other-kernel-security-features",
"tags": "",
"text": "Capabilities are just one of the many security features provided by\nmodern Linux kernels. It is also possible to leverage existing,\nwell-known systems like TOMOYO, AppArmor, SELinux, GRSEC, etc. with\nDocker. While Docker currently only enables capabilities, it doesn't interfere\nwith the other systems. This means that there are many different ways to\nharden a Docker host. Here are a few examples. You can run a kernel with GRSEC and PAX. This will add many safety\n checks, both at compile-time and run-time; it will also defeat many\n exploits, thanks to techniques like address randomization. It doesn't\n require Docker-specific configuration, since those security features\n apply system-wide, independent of containers. If your distribution comes with security model templates for\n Docker containers, you can use them out of the box. For instance, we\n ship a template that works with AppArmor, and Red Hat comes with SELinux\n policies for Docker. These templates provide an extra safety net (even\n though it overlaps greatly with capabilities). You can define your own policies using your favorite access control\n mechanism. Just like there are many third-party tools to augment Docker containers\nwith e.g., special network topologies or shared filesystems, you can\nexpect to see tools to harden existing Docker containers without\naffecting Docker's core. Recent improvements in Linux namespaces will soon make it possible to run\nfull-featured containers without root privileges, thanks to the new user\nnamespace. This is covered in detail here .\nMoreover, this will solve the problem caused by sharing filesystems\nbetween host and guest, since the user namespace allows users within\ncontainers (including the root user) to be mapped to other users in the\nhost system. Today, Docker does not directly support user namespaces, but they\nmay still be utilized by Docker containers on supported kernels,\nby directly using the clone syscall, or utilizing the 'unshare'\nutility.
Using this, some users may find it possible to drop\nmore capabilities from their process as user namespaces provide\nan artificial capabilities set. Likewise, however, this artificial\ncapabilities set may require use of 'capsh' to restrict the\nuser-namespace capabilities set when using 'unshare'. Eventually, it is expected that Docker will have direct, native support\nfor user namespaces, simplifying the process of hardening containers.",
"title": "Other Kernel Security Features"
},
{
"loc": "/articles/security#conclusions",
"tags": "",
"text": "Docker containers are, by default, quite secure, especially if you take\ncare of running your processes inside the containers as non-privileged\nusers (i.e., non-root). You can add an extra layer of safety by enabling AppArmor, SELinux,\nGRSEC, or your favorite hardening solution. Last but not least, if you see interesting security features in other\ncontainerization systems, these are simply kernel features that may\nbe implemented in Docker as well. We welcome users to submit issues,\npull requests, and communicate via the mailing list. References: Docker Containers: How Secure Are They? (2013). On the Security of Containers (2014).",
"title": "Conclusions"
},
{
"loc": "/articles/https/",
"tags": "",
"text": "Protecting the Docker daemon Socket with HTTPS\nBy default, Docker runs via a non-networked Unix socket. It can also\noptionally communicate using a HTTP socket.\nIf you need Docker to be reachable via the network in a safe manner, you can\nenable TLS by specifying the tlsverify flag and pointing Docker's\ntlscacert flag to a trusted CA certificate.\nIn the daemon mode, it will only allow connections from clients\nauthenticated by a certificate signed by that CA. In the client mode,\nit will only connect to servers with a certificate signed by that CA.\n\nWarning:\nUsing TLS and managing a CA is an advanced topic. Please familiarize yourself\nwith OpenSSL, x509 and TLS before using it in production.\nWarning:\nThese TLS commands will only generate a working set of certificates on Linux.\nMac OS X comes with a version of OpenSSL that is incompatible with the\ncertificates that Docker requires.\n\nCreate a CA, server and client keys with OpenSSL\n\nNote: replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host.\n\nFirst generate CA private and public keys:\n$ openssl genrsa -aes256 -out ca-key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n......+++\n...............+++\ne is 65537 (0x10001)\nEnter pass phrase for ca-key.pem:\nVerifying - Enter pass phrase for ca-key.pem:\n$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem\nEnter pass phrase for ca-key.pem:\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:Queensland\nLocality Name (eg, city) []:Brisbane\nOrganization Name (eg, company) [Internet Widgits 
Pty Ltd]:Docker Inc\nOrganizational Unit Name (eg, section) []:Boot2Docker\nCommon Name (e.g. server FQDN or YOUR name) []:$HOST\nEmail Address []:Sven@home.org.au\n\nNow that we have a CA, you can create a server key and certificate\nsigning request (CSR). Make sure that \"Common Name\" (i.e., server FQDN or YOUR\nname) matches the hostname you will use to connect to Docker:\n\nNote: replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host.\n\n$ openssl genrsa -out server-key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n......................................................+++\n............................................+++\ne is 65537 (0x10001)\n$ openssl req -subj \"/CN=$HOST\" -new -key server-key.pem -out server.csr\n\nNext, we're going to sign the public key with our CA:\nSince TLS connections can be made via IP address as well as DNS name, they need\nto be specified when creating the certificate. For example, to allow connections\nusing 10.10.10.20 and 127.0.0.1:\n$ echo subjectAltName = IP:10.10.10.20,IP:127.0.0.1 > extfile.cnf\n\n$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \\\n -CAcreateserial -out server-cert.pem -extfile extfile.cnf\nSignature ok\nsubject=/CN=your.host.com\nGetting CA Private Key\nEnter pass phrase for ca-key.pem:\n
\nFor client authentication, create a client key and certificate signing\nrequest:\n$ openssl genrsa -out key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n...............................................+++\n...............................................................+++\ne is 65537 (0x10001)\n$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr\n\nTo make the key suitable for client authentication, create an extensions\nconfig file:\n$ echo extendedKeyUsage = clientAuth > extfile.cnf\n\nNow sign the public key:\n$ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \\\n -CAcreateserial -out cert.pem -extfile extfile.cnf\nSignature ok\nsubject=/CN=client\nGetting CA Private Key\nEnter pass phrase for ca-key.pem:\n
\nAfter generating cert.pem and server-cert.pem you can safely remove the\ntwo certificate signing requests:\n$ rm -v client.csr server.csr\n\nWith a default umask of 022, your secret keys will be world-readable, and\nwritable by you and your group.\nIn order to protect your keys from accidental damage, you will want to remove their\nwrite permissions. To make them only readable by you, change file modes as follows:\n$ chmod -v 0400 ca-key.pem key.pem server-key.pem\n\nCertificates can be world-readable, but you might want to remove write access to\nprevent accidental damage:\n$ chmod -v 0444 ca.pem server-cert.pem cert.pem\n\nNow you can make the Docker daemon only accept connections from clients\nproviding a certificate trusted by our CA:\n$ docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \\\n -H=0.0.0.0:2376\n\nTo be able to connect to Docker and validate its certificate, you now\nneed to provide your client keys, certificates and trusted CA:\n\nNote: replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host.\n\n$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \\\n -H=$HOST:2376 version\n\n\nNote:\nDocker over TLS should run on TCP port 2376.\nWarning:\nAs shown in the example above, you don't have to run the docker client\nwith sudo or the docker group when you use certificate authentication.\nThat means anyone with the keys can give any instructions to your Docker\ndaemon, giving them root access to the machine hosting the daemon. Guard\nthese keys as you would a root password!\n
\nSecure by default\nIf you want to secure your Docker client connections by default, you can move\nthe files to the .docker directory in your home directory -- and set the\nDOCKER_HOST and DOCKER_TLS_VERIFY variables as well (instead of passing\n-H=tcp://$HOST:2376 and --tlsverify on every call).\n$ mkdir -pv ~/.docker\n$ cp -v {ca,cert,key}.pem ~/.docker\n$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1\n\nDocker will now connect securely by default:\n$ docker ps\n\nOther modes\nIf you don't want to have complete two-way authentication, you can run\nDocker in various other modes by mixing the flags.\nDaemon modes\n\ntlsverify, tlscacert, tlscert, tlskey set: Authenticate clients\ntls, tlscert, tlskey: Do not authenticate clients\n\nClient modes\n\ntls: Authenticate server based on public/default CA pool\ntlsverify, tlscacert: Authenticate server based on given CA\ntls, tlscert, tlskey: Authenticate with client certificate, do not\n authenticate server based on given CA\ntlsverify, tlscacert, tlscert, tlskey: Authenticate with client\n certificate and authenticate server based on given CA\n\nIf found, the client will send its client certificate, so you just need\nto drop your keys into ~/.docker/{ca,cert,key}.pem. Alternatively,\nif you want to store your keys in another location, you can specify that\nlocation using the environment variable DOCKER_CERT_PATH.\n$ export DOCKER_CERT_PATH=~/.docker/zone1/\n$ docker --tlsverify ps\n\nConnecting to the Secure Docker port using curl\nTo use curl to make test API requests, you need to use three extra command line\nflags:\n$ curl https://$HOST:2376/images/json \\\n --cert ~/.docker/cert.pem \\\n --key ~/.docker/key.pem \\\n --cacert ~/.docker/ca.pem",
"title": "Running Docker with HTTPS"
},
{
"loc": "/articles/https#protecting-the-docker-daemon-socket-with-https",
"tags": "",
"text": "By default, Docker runs via a non-networked Unix socket. It can also\noptionally communicate using an HTTP socket. If you need Docker to be reachable via the network in a safe manner, you can\nenable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate. In the daemon mode, it will only allow connections from clients\nauthenticated by a certificate signed by that CA. In the client mode,\nit will only connect to servers with a certificate signed by that CA. Warning :\nUsing TLS and managing a CA is an advanced topic. Please familiarize yourself\nwith OpenSSL, x509 and TLS before using it in production. Warning :\nThese TLS commands will only generate a working set of certificates on Linux.\nMac OS X comes with a version of OpenSSL that is incompatible with the\ncertificates that Docker requires.",
"title": "Protecting the Docker daemon Socket with HTTPS"
},
{
"loc": "/articles/https#create-a-ca-server-and-client-keys-with-openssl",
"tags": "",
"text": "Note : replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host. First generate CA private and public keys: $ openssl genrsa -aes256 -out ca-key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n......+++\n...............+++\ne is 65537 (0x10001)\nEnter pass phrase for ca-key.pem:\nVerifying - Enter pass phrase for ca-key.pem:\n$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem\nEnter pass phrase for ca-key.pem:\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:Queensland\nLocality Name (eg, city) []:Brisbane\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Docker Inc\nOrganizational Unit Name (eg, section) []:Boot2Docker\nCommon Name (e.g. server FQDN or YOUR name) []:$HOST\nEmail Address []:Sven@home.org.au Now that we have a CA, you can create a server key and certificate\nsigning request (CSR). Make sure that \"Common Name\" (i.e., server FQDN or YOUR\nname) matches the hostname you will use to connect to Docker: Note : replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host. 
$ openssl genrsa -out server-key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n......................................................+++\n............................................+++\ne is 65537 (0x10001)\n$ openssl req -subj \"/CN=$HOST\" -new -key server-key.pem -out server.csr Next, we're going to sign the public key with our CA: Since TLS connections can be made via IP address as well as DNS name, they need\nto be specified when creating the certificate. For example, to allow connections\nusing 10.10.10.20 and 127.0.0.1 : $ echo subjectAltName = IP:10.10.10.20,IP:127.0.0.1 > extfile.cnf\n\n$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \\\n -CAcreateserial -out server-cert.pem -extfile extfile.cnf\nSignature ok\nsubject=/CN=your.host.com\nGetting CA Private Key\nEnter pass phrase for ca-key.pem: For client authentication, create a client key and certificate signing\nrequest: $ openssl genrsa -out key.pem 2048\nGenerating RSA private key, 2048 bit long modulus\n...............................................+++\n...............................................................+++\ne is 65537 (0x10001)\n$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr To make the key suitable for client authentication, create an extensions\nconfig file: $ echo extendedKeyUsage = clientAuth > extfile.cnf Now sign the public key: $ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \\\n -CAcreateserial -out cert.pem -extfile extfile.cnf\nSignature ok\nsubject=/CN=client\nGetting CA Private Key\nEnter pass phrase for ca-key.pem: After generating cert.pem and server-cert.pem you can safely remove the\ntwo certificate signing requests: $ rm -v client.csr server.csr With a default umask of 022, your secret keys will be world-readable, and\nwritable by you and your group. In order to protect your keys from accidental damage, you will want to remove their\nwrite permissions.
To make them only readable by you, change file modes as follows: $ chmod -v 0400 ca-key.pem key.pem server-key.pem Certificates can be world-readable, but you might want to remove write access to\nprevent accidental damage: $ chmod -v 0444 ca.pem server-cert.pem cert.pem Now you can make the Docker daemon only accept connections from clients\nproviding a certificate trusted by our CA: $ docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \\\n -H=0.0.0.0:2376 To be able to connect to Docker and validate its certificate, you now\nneed to provide your client keys, certificates and trusted CA: Note : replace all instances of $HOST in the following example with the\nDNS name of your Docker daemon's host. $ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \\\n -H=$HOST:2376 version Note :\nDocker over TLS should run on TCP port 2376. Warning :\nAs shown in the example above, you don't have to run the docker client\nwith sudo or the docker group when you use certificate authentication.\nThat means anyone with the keys can give any instructions to your Docker\ndaemon, giving them root access to the machine hosting the daemon. Guard\nthese keys as you would a root password!",
"title": "Create a CA, server and client keys with OpenSSL"
},
{
"loc": "/articles/https#secure-by-default",
"tags": "",
"text": "If you want to secure your Docker client connections by default, you can move\nthe files to the .docker directory in your home directory -- and set the DOCKER_HOST and DOCKER_TLS_VERIFY variables as well (instead of passing -H=tcp://$HOST:2376 and --tlsverify on every call). $ mkdir -pv ~/.docker\n$ cp -v {ca,cert,key}.pem ~/.docker\n$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1 Docker will now connect securely by default: $ docker ps",
"title": "Secure by default"
},
{
"loc": "/articles/https#other-modes",
"tags": "",
"text": "If you don't want to have complete two-way authentication, you can run\nDocker in various other modes by mixing the flags. Daemon modes tlsverify , tlscacert , tlscert , tlskey set: Authenticate clients tls , tlscert , tlskey : Do not authenticate clients Client modes tls : Authenticate server based on public/default CA pool tlsverify , tlscacert : Authenticate server based on given CA tls , tlscert , tlskey : Authenticate with client certificate, do not\n authenticate server based on given CA tlsverify , tlscacert , tlscert , tlskey : Authenticate with client\n certificate and authenticate server based on given CA If found, the client will send its client certificate, so you just need\nto drop your keys into ~/.docker/{ca,cert,key}.pem . Alternatively,\nif you want to store your keys in another location, you can specify that\nlocation using the environment variable DOCKER_CERT_PATH . $ export DOCKER_CERT_PATH=~/.docker/zone1/\n$ docker --tlsverify ps Connecting to the Secure Docker port using curl To use curl to make test API requests, you need to use three extra command line\nflags: $ curl https://$HOST:2376/images/json \\\n --cert ~/.docker/cert.pem \\\n --key ~/.docker/key.pem \\\n --cacert ~/.docker/ca.pem",
"title": "Other modes"
},
{
"loc": "/articles/registry_mirror/",
"tags": "",
"text": "Run a local registry mirror\nWhy?\nIf you have multiple instances of Docker running in your environment\n(e.g., multiple physical or virtual machines, all running the Docker\ndaemon), each time one of them requires an image that it doesn't have,\nit will go out to the internet and fetch it from the public Docker\nregistry. By running a local registry mirror, you can keep most of the\nimage fetch traffic on your local network.\nHow does it work?\nThe first time you request an image from your local registry mirror,\nit pulls the image from the public Docker registry and stores it locally\nbefore handing it back to you. On subsequent requests, the local registry\nmirror is able to serve the image from its own storage.\nHow do I set up a local registry mirror?\nThere are two steps to set up and use a local registry mirror.\nStep 1: Configure your Docker daemons to use the local registry mirror\nYou will need to pass the --registry-mirror option to your Docker daemon on\nstartup:\nsudo docker --registry-mirror=http://my-docker-mirror-host -d\n\nFor example, if your mirror is serving on http://10.0.0.2:5000, you would run:\nsudo docker --registry-mirror=http://10.0.0.2:5000 -d\n\nNOTE:\nDepending on your local host setup, you may be able to add the\n--registry-mirror option to the DOCKER_OPTS variable in\n/etc/default/docker.\nStep 2: Run the local registry mirror\nYou will need to start a local registry mirror service. The\nregistry image provides this\nfunctionality.
For example, to run a local registry mirror that serves on\nport 5000 and mirrors the content at registry-1.docker.io:\nsudo docker run -p 5000:5000 \\\n -e STANDALONE=false \\\n -e MIRROR_SOURCE=https://registry-1.docker.io \\\n -e MIRROR_SOURCE_INDEX=https://index.docker.io registry\n\nTest it out\nWith your mirror running, pull an image that you haven't pulled before (using\ntime to time it):\n$ time sudo docker pull node:latest\nPulling repository node\n[...]\n\nreal 1m14.078s\nuser 0m0.176s\nsys 0m0.120s\n\nNow, remove the image from your local machine:\n$ sudo docker rmi node:latest\n\nFinally, re-pull the image:\n$ time sudo docker pull node:latest\nPulling repository node\n[...]\n\nreal 0m51.376s\nuser 0m0.120s\nsys 0m0.116s\n\nThe second time around, the local registry mirror served the image from storage,\navoiding a trip out to the internet to refetch it.",
"title": "Run a local registry mirror"
},
{
"loc": "/articles/registry_mirror#run-a-local-registry-mirror",
"tags": "",
"text": "",
"title": "Run a local registry mirror"
},
{
"loc": "/articles/registry_mirror#why",
"tags": "",
"text": "If you have multiple instances of Docker running in your environment\n(e.g., multiple physical or virtual machines, all running the Docker\ndaemon), each time one of them requires an image that it doesn't have,\nit will go out to the internet and fetch it from the public Docker\nregistry. By running a local registry mirror, you can keep most of the\nimage fetch traffic on your local network.",
"title": "Why?"
},
{
"loc": "/articles/registry_mirror#how-does-it-work",
"tags": "",
"text": "The first time you request an image from your local registry mirror,\nit pulls the image from the public Docker registry and stores it locally\nbefore handing it back to you. On subsequent requests, the local registry\nmirror is able to serve the image from its own storage.",
"title": "How does it work?"
},
{
"loc": "/articles/registry_mirror#how-do-i-set-up-a-local-registry-mirror",
"tags": "",
"text": "There are two steps to set up and use a local registry mirror. Step 1: Configure your Docker daemons to use the local registry mirror You will need to pass the --registry-mirror option to your Docker daemon on\nstartup: sudo docker --registry-mirror=http://my-docker-mirror-host -d For example, if your mirror is serving on http://10.0.0.2:5000 , you would run: sudo docker --registry-mirror=http://10.0.0.2:5000 -d NOTE: \nDepending on your local host setup, you may be able to add the --registry-mirror option to the DOCKER_OPTS variable in /etc/default/docker . Step 2: Run the local registry mirror You will need to start a local registry mirror service. The registry image provides this\nfunctionality. For example, to run a local registry mirror that serves on\nport 5000 and mirrors the content at registry-1.docker.io : sudo docker run -p 5000:5000 \\\n -e STANDALONE=false \\\n -e MIRROR_SOURCE=https://registry-1.docker.io \\\n -e MIRROR_SOURCE_INDEX=https://index.docker.io registry",
"title": "How do I set up a local registry mirror?"
},
{
"loc": "/articles/registry_mirror#test-it-out",
"tags": "",
"text": "With your mirror running, pull an image that you haven't pulled before (using time to time it): $ time sudo docker pull node:latest\nPulling repository node\n[...]\n\nreal 1m14.078s\nuser 0m0.176s\nsys 0m0.120s Now, remove the image from your local machine: $ sudo docker rmi node:latest Finally, re-pull the image: $ time sudo docker pull node:latest\nPulling repository node\n[...]\n\nreal 0m51.376s\nuser 0m0.120s\nsys 0m0.116s The second time around, the local registry mirror served the image from storage,\navoiding a trip out to the internet to refetch it.",
"title": "Test it out"
},
{
"loc": "/articles/host_integration/",
"tags": "",
"text": "Automatically Start Containers\nAs of Docker 1.2,\nrestart policies are the\nbuilt-in Docker mechanism for restarting containers when they exit. If set,\nrestart policies will be used when the Docker daemon starts up, as typically\nhappens after a system boot. Restart policies will ensure that linked containers\nare started in the correct order.\nIf restart policies don't suit your needs (i.e., you have non-Docker processes\nthat depend on Docker containers), you can use a process manager like\nupstart,\nsystemd or\nsupervisor instead.\nUsing a Process Manager\nDocker does not set any restart policies by default, but be aware that they will\nconflict with most process managers. So don't set restart policies if you are\nusing a process manager.\nNote: Prior to Docker 1.2, restarting of Docker containers had to be\nexplicitly disabled. Refer to the\nprevious version of this article for the\ndetails on how to do that.\nWhen you have finished setting up your image and are happy with your\nrunning container, you can then attach a process manager to manage it.\nWhen you run docker start -a, Docker will automatically attach to the\nrunning container, or start it if needed and forward all signals so that\nthe process manager can detect when a container stops and correctly\nrestart it.\nHere are a few sample scripts for systemd and upstart to integrate with\nDocker.\nExamples\nThe examples below show configuration files for two popular process managers,\nupstart and systemd. In these examples, we'll assume that we have already\ncreated a container to run Redis with --name=redis_server. 
These files define\na new service that will be started after the docker daemon service has started.\nupstart\ndescription \"Redis container\"\nauthor \"Me\"\nstart on filesystem and started docker\nstop on runlevel [!2345]\nrespawn\nscript\n /usr/bin/docker start -a redis_server\nend script\n\nsystemd\n[Unit]\nDescription=Redis container\nAuthor=Me\nAfter=docker.service\n\n[Service]\nRestart=always\nExecStart=/usr/bin/docker start -a redis_server\nExecStop=/usr/bin/docker stop -t 2 redis_server\n\n[Install]\nWantedBy=multi-user.target",
"title": "Automatically starting containers"
},
{
"loc": "/articles/host_integration#automatically-start-containers",
"tags": "",
"text": "As of Docker 1.2, restart policies are the\nbuilt-in Docker mechanism for restarting containers when they exit. If set,\nrestart policies will be used when the Docker daemon starts up, as typically\nhappens after a system boot. Restart policies will ensure that linked containers\nare started in the correct order. If restart policies don't suit your needs (i.e., you have non-Docker processes\nthat depend on Docker containers), you can use a process manager like upstart , systemd or supervisor instead.",
"title": "Automatically Start Containers"
},
{
"loc": "/articles/host_integration#using-a-process-manager",
"tags": "",
"text": "Docker does not set any restart policies by default, but be aware that they will\nconflict with most process managers. So don't set restart policies if you are\nusing a process manager. Note: Prior to Docker 1.2, restarting of Docker containers had to be\nexplicitly disabled. Refer to the previous version of this article for the\ndetails on how to do that. When you have finished setting up your image and are happy with your\nrunning container, you can then attach a process manager to manage it.\nWhen you run docker start -a , Docker will automatically attach to the\nrunning container, or start it if needed and forward all signals so that\nthe process manager can detect when a container stops and correctly\nrestart it. Here are a few sample scripts for systemd and upstart to integrate with\nDocker.",
"title": "Using a Process Manager"
},
{
"loc": "/articles/host_integration#examples",
"tags": "",
"text": "The examples below show configuration files for two popular process managers,\nupstart and systemd. In these examples, we'll assume that we have already\ncreated a container to run Redis with --name=redis_server . These files define\na new service that will be started after the docker daemon service has started. upstart description \"Redis container\"\nauthor \"Me\"\nstart on filesystem and started docker\nstop on runlevel [!2345]\nrespawn\nscript\n /usr/bin/docker start -a redis_server\nend script systemd [Unit]\nDescription=Redis container\nAuthor=Me\nAfter=docker.service\n\n[Service]\nRestart=always\nExecStart=/usr/bin/docker start -a redis_server\nExecStop=/usr/bin/docker stop -t 2 redis_server\n\n[Install]\nWantedBy=local.target",
"title": "Examples"
},
{
"loc": "/articles/baseimages/",
"tags": "",
"text": "Create a Base Image\nSo you want to create your own Base Image? Great!\nThe specific process will depend heavily on the Linux distribution you\nwant to package. We have some examples below, and you are encouraged to\nsubmit pull requests to contribute new ones.\nCreate a full image using tar\nIn general, you'll want to start with a working machine that is running\nthe distribution you'd like to package as a base image, though that is\nnot required for some tools like Debian's\nDebootstrap, which you can also\nuse to build Ubuntu images.\nIt can be as simple as this to create an Ubuntu base image:\n$ sudo debootstrap raring raring /dev/null\n$ sudo tar -C raring -c . | sudo docker import - raring\na29c15f1bf7a\n$ sudo docker run raring cat /etc/lsb-release\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=13.04\nDISTRIB_CODENAME=raring\nDISTRIB_DESCRIPTION=\"Ubuntu 13.04\"\n\nThere are more example scripts for creating base images in the Docker\nGitHub Repo:\n\nBusyBox\nCentOS / Scientific Linux CERN (SLC) on Debian/Ubuntu or\n on CentOS/RHEL/SLC/etc.\nDebian / Ubuntu\n\nCreating a simple base image using scratch\nThere is a special repository in the Docker registry called scratch, which\nwas created using an empty tar file:\n$ tar cv --files-from /dev/null | docker import - scratch\n\nwhich you can docker pull. You can then use that\nimage to base your new minimal containers FROM:\nFROM scratch\nCOPY true-asm /true\nCMD [\"/true\"]\n\nThe Dockerfile above is from an extremely minimal image - tianon/true.\nMore resources\nThere are lots more resources available to help you write your 'Dockerfile`.\n\nThere's a complete guide to all the instructions available for use in a Dockerfile in the reference section.\nTo help you write a clear, readable, maintainable Dockerfile, we've also\nwritten a Dockerfile Best Practices guide.\nIf you're working on an Official Repo, be sure to check out the Official Repo Guidelines.",
"title": "Creating a base image"
},
{
"loc": "/articles/baseimages#create-a-base-image",
"tags": "",
"text": "So you want to create your own Base Image ? Great! The specific process will depend heavily on the Linux distribution you\nwant to package. We have some examples below, and you are encouraged to\nsubmit pull requests to contribute new ones.",
"title": "Create a Base Image"
},
{
"loc": "/articles/baseimages#create-a-full-image-using-tar",
"tags": "",
"text": "In general, you'll want to start with a working machine that is running\nthe distribution you'd like to package as a base image, though that is\nnot required for some tools like Debian's Debootstrap , which you can also\nuse to build Ubuntu images. It can be as simple as this to create an Ubuntu base image: $ sudo debootstrap raring raring /dev/null\n$ sudo tar -C raring -c . | sudo docker import - raring\na29c15f1bf7a\n$ sudo docker run raring cat /etc/lsb-release\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=13.04\nDISTRIB_CODENAME=raring\nDISTRIB_DESCRIPTION=\"Ubuntu 13.04\" There are more example scripts for creating base images in the Docker\nGitHub Repo: BusyBox CentOS / Scientific Linux CERN (SLC) on Debian/Ubuntu or\n on CentOS/RHEL/SLC/etc. Debian / Ubuntu",
"title": "Create a full image using tar"
},
{
"loc": "/articles/baseimages#creating-a-simple-base-image-using-scratch",
"tags": "",
"text": "There is a special repository in the Docker registry called scratch , which\nwas created using an empty tar file: $ tar cv --files-from /dev/null | docker import - scratch which you can docker pull . You can then use that\nimage to base your new minimal containers FROM : FROM scratch\nCOPY true-asm /true\nCMD [\"/true\"] The Dockerfile above is from an extremely minimal image - tianon/true .",
"title": "Creating a simple base image using scratch"
},
{
"loc": "/articles/baseimages#more-resources",
"tags": "",
"text": "There are lots more resources available to help you write your 'Dockerfile`. There's a complete guide to all the instructions available for use in a Dockerfile in the reference section. To help you write a clear, readable, maintainable Dockerfile , we've also\nwritten a Dockerfile Best Practices guide . If you're working on an Official Repo, be sure to check out the Official Repo Guidelines .",
"title": "More resources"
},
{
"loc": "/articles/dockerfile_best-practices/",
"tags": "",
"text": "Best practices for writing Dockerfiles\nOverview\nDocker can build images automatically by reading the instructions from a\nDockerfile, a text file that contains all the commands, in order, needed to\nbuild a given image. Dockerfiles adhere to a specific format and use a\nspecific set of instructions. You can learn the basics on the \nDockerfile Reference page. If\nyou\u2019re new to writing Dockerfiles, you should start there.\nThis document covers the best practices and methods recommended by Docker,\nInc. and the Docker community for creating easy-to-use, effective\nDockerfiles. We strongly suggest you follow these recommendations (in fact,\nif you\u2019re creating an Official Image, you must adhere to these practices).\nYou can see many of these practices and recommendations in action in the buildpack-deps Dockerfile.\n\nNote: for more detailed explanations of any of the Dockerfile commands\nmentioned here, visit the Dockerfile Reference page.\n\nGeneral guidelines and recommendations\nContainers should be ephemeral\nThe container produced by the image your Dockerfile defines should be as\nephemeral as possible. By \u201cephemeral,\u201d we mean that it can be stopped and\ndestroyed and a new one built and put in place with an absolute minimum of\nset-up and configuration.\nUse a .dockerignore file\nFor faster uploading and efficiency during docker build, you should use\na .dockerignore file to exclude files or directories from the build\ncontext and final image. 
For example, unless .git is needed by your build\nprocess or scripts, you should add it to .dockerignore, which can save many\nmegabytes worth of upload time.\nAvoid installing unnecessary packages\nIn order to reduce complexity, dependencies, file sizes, and build times, you\nshould avoid installing extra or unnecessary packages just because they\nmight be \u201cnice to have.\u201d For example, you don\u2019t need to include a text editor\nin a database image.\nRun only one process per container\nIn almost all cases, you should only run a single process in a single\ncontainer. Decoupling applications into multiple containers makes it much\neasier to scale horizontally and reuse containers. If that service depends on\nanother service, make use of container linking.\nMinimize the number of layers\nYou need to find the balance between readability (and thus long-term\nmaintainability) of the Dockerfile and minimizing the number of layers it\nuses. Be strategic and cautious about the number of layers you use.\nSort multi-line arguments\nWhenever possible, ease later changes by sorting multi-line arguments\nalphanumerically. This will help you avoid duplication of packages and make the\nlist much easier to update. This also makes PRs a lot easier to read and\nreview. 
Adding a space before a backslash (\\) helps as well.\nHere\u2019s an example from the buildpack-deps image:\nRUN apt-get update && apt-get install -y \\\n bzr \\\n cvs \\\n git \\\n mercurial \\\n subversion\n\nBuild cache\nDuring the process of building an image Docker will step through the\ninstructions in your Dockerfile executing each in the order specified.\nAs each instruction is examined Docker will look for an existing image in its\ncache that it can reuse, rather than creating a new (duplicate) image.\nIf you do not want to use the cache at all you can use the --no-cache=true\noption on the docker build command.\nHowever, if you do let Docker use its cache then it is very important to\nunderstand when it will, and will not, find a matching image. The basic rules\nthat Docker will follow are outlined below:\n\n\nStarting with a base image that is already in the cache, the next\ninstruction is compared against all child images derived from that base\nimage to see if one of them was built using the exact same instruction. If\nnot, the cache is invalidated.\n\n\nIn most cases simply comparing the instruction in the Dockerfile with one\nof the child images is sufficient. However, certain instructions require\na little more examination and explanation.\n\n\nIn the case of the ADD and COPY instructions, the contents of the file(s)\nbeing put into the image are examined. Specifically, a checksum is done\nof the file(s) and then that checksum is used during the cache lookup.\nIf anything has changed in the file(s), including its metadata,\nthen the cache is invalidated.\n\n\nAside from the ADD and COPY commands cache checking will not look at the\nfiles in the container to determine a cache match. For example, when processing\na RUN apt-get -y update command the files updated in the container\nwill not be examined to determine if a cache hit exists. 
In that case just\nthe command string itself will be used to find a match.\n\n\nOnce the cache is invalidated, all subsequent Dockerfile commands will\ngenerate new images and the cache will not be used.\nThe Dockerfile instructions\nBelow you'll find recommendations for the best way to write the\nvarious instructions available for use in a Dockerfile.\nFROM\nWhenever possible, use current Official Repositories as the basis for your\nimage. We recommend the Debian image\nsince it\u2019s very tightly controlled and kept extremely minimal (currently under\n100 MB), while still being a full distribution.\nRUN\nAs always, to make your Dockerfile more readable, understandable, and\nmaintainable, put long or complex RUN statements on multiple lines separated\nwith backslashes.\nProbably the most common use-case for RUN is an application of apt-get.\nWhen using apt-get, here are a few things to keep in mind:\n\n\nDon\u2019t do RUN apt-get update on a single line. This will cause\ncaching issues if the referenced archive gets updated, which will make your\nsubsequent apt-get install fail without comment.\n\n\nAvoid RUN apt-get upgrade or dist-upgrade, since many of the \u201cessential\u201d\npackages from the base images will fail to upgrade inside an unprivileged\ncontainer. If a base package is out of date, you should contact its\nmaintainers. If you know there\u2019s a particular package, foo, that needs to be\nupdated, use apt-get install -y foo and it will update automatically.\n\n\nDo write instructions like:\nRUN apt-get update && apt-get install -y package-bar package-foo package-baz\n\n\nWriting the instruction this way not only makes it easier to read\nand maintain, but also, by including apt-get update, ensures that the cache\nwill naturally be busted and the latest versions will be installed with no\nfurther coding or manual intervention required.\n\nFurther natural cache-busting can be realized by version-pinning packages\n(e.g., package-foo=1.3.*). 
This will force retrieval of that version\nregardless of what\u2019s in the cache.\nWriting your apt-get code this way will greatly ease maintenance and reduce\nfailures due to unanticipated changes in required packages.\n\nExample\nBelow is a well-formed RUN instruction that demonstrates the above\nrecommendations. Note that the last package, s3cmd, specifies a version\n1.1.0*. If the image previously used an older version, specifying the new one\nwill cause a cache bust of apt-get update and ensure the installation of\nthe new version (which in this case had a new, required feature).\nRUN apt-get update && apt-get install -y \\\n aufs-tools \\\n automake \\\n btrfs-tools \\\n build-essential \\\n curl \\\n dpkg-sig \\\n git \\\n iptables \\\n libapparmor-dev \\\n libcap-dev \\\n libsqlite3-dev \\\n lxc=1.0* \\\n mercurial \\\n parallel \\\n reprepro \\\n ruby1.9.1 \\\n ruby1.9.1-dev \\\n s3cmd=1.1.0*\n\nWriting the instruction this way also helps you avoid potential duplication of\na given package because it is much easier to read than an instruction like:\nRUN apt-get install -y package-foo && apt-get install -y package-bar\n\nCMD\nThe CMD instruction should be used to run the software contained by your\nimage, along with any arguments. CMD should almost always be used in the\nform of CMD [\"executable\", \"param1\", \"param2\"\u2026]. Thus, if the image is for a\nservice (Apache, Rails, etc.), you would run something like\nCMD [\"apache2\",\"-DFOREGROUND\"]. Indeed, this form of the instruction is\nrecommended for any service-based image.\nIn most other cases, CMD should be given an interactive shell (bash, python,\nperl, etc), for example, CMD [\"perl\", \"-de0\"], CMD [\"python\"], or\nCMD [\"php\", \"-a\"]. 
Using this form means that when you execute something like\ndocker run -it python, you\u2019ll get dropped into a usable shell, ready to go.\nCMD should rarely be used in the manner of CMD [\"param\", \"param\"] in\nconjunction with ENTRYPOINT, unless\nyou and your expected users are already quite familiar with how ENTRYPOINT\nworks. \nEXPOSE\nThe EXPOSE instruction indicates the ports on which a container will listen\nfor connections. Consequently, you should use the common, traditional port for\nyour application. For example, an image containing the Apache web server would\nuse EXPOSE 80, while an image containing MongoDB would use EXPOSE 27017 and\nso on.\nFor external access, your users can execute docker run with a flag indicating\nhow to map the specified port to the port of their choice.\nFor container linking, Docker provides environment variables for the path from\nthe recipient container back to the source (i.e., MYSQL_PORT_3306_TCP).\nENV\nIn order to make new software easier to run, you can use ENV to update the\nPATH environment variable for the software your container installs. 
For\nexample, ENV PATH /usr/local/nginx/bin:$PATH will ensure that CMD [\"nginx\"]\njust works.\nThe ENV instruction is also useful for providing required environment\nvariables specific to services you wish to containerize, such as Postgres\u2019s\nPGDATA.\nLastly, ENV can also be used to set commonly used version numbers so that\nversion bumps are easier to maintain, as seen in the following example:\nENV PG_MAJOR 9.3\nENV PG_VERSION 9.3.4\nRUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress \u2026\nENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH\n\nSimilar to having constant variables in a program (as opposed to hard-coding\nvalues), this approach lets you change a single ENV instruction to\nauto-magically bump the version of the software in your container.\nADD or COPY\nAlthough ADD and COPY are functionally similar, generally speaking, COPY\nis preferred. That\u2019s because it\u2019s more transparent than ADD. COPY only\nsupports the basic copying of local files into the container, while ADD has\nsome features (like local-only tar extraction and remote URL support) that are\nnot immediately obvious. Consequently, the best use for ADD is local tar file\nauto-extraction into the image, as in ADD rootfs.tar.xz /.\nIf you have multiple Dockerfile steps that use different files from your\ncontext, COPY them individually, rather than all at once. This will ensure that\neach step's build cache is only invalidated (forcing the step to be re-run) if the\nspecifically required files change.\nFor example:\nCOPY requirements.txt /tmp/\nRUN pip install -r /tmp/requirements.txt\nCOPY . /tmp/\n\nResults in fewer cache invalidations for the RUN step, than if you put the\nCOPY . /tmp/ before it.\nBecause image size matters, using ADD to fetch packages from remote URLs is\nstrongly discouraged; you should use curl or wget instead. 
That way you can\ndelete the files you no longer need after they've been extracted and you won't\nhave to add another layer in your image. For example, you should avoid doing\nthings like:\nADD http://example.com/big.tar.xz /usr/src/things/\nRUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things\nRUN make -C /usr/src/things all\n\nAnd instead, do something like:\nRUN mkdir -p /usr/src/things \\\n && curl -SL http://example.com/big.tar.xz \\\n | tar -xJC /usr/src/things \\\n && make -C /usr/src/things all\n\nFor other items (files, directories) that do not require ADD\u2019s tar\nauto-extraction capability, you should always use COPY.\nENTRYPOINT\nThe best use for ENTRYPOINT is to set the image's main command, allowing that\nimage to be run as though it was that command (and then use CMD as the\ndefault flags).\nLet's start with an example of an image for the command line tool s3cmd:\nENTRYPOINT [\"s3cmd\"]\nCMD [\"--help\"]\n\nNow the image can be run like this to show the command's help:\n$ docker run s3cmd\n\nOr using the right parameters to execute a command:\n$ docker run s3cmd ls s3://mybucket\n\nThis is useful because the image name can double as a reference to the binary as\nshown in the command above.\nThe ENTRYPOINT instruction can also be used in combination with a helper\nscript, allowing it to function in a similar way to the command above, even\nwhen starting the tool may require more than one step.\nFor example, the Postgres Official Image\nuses the following script as its ENTRYPOINT:\n#!/bin/bash\nset -e\n\nif [ \"$1\" = 'postgres' ]; then\n chown -R postgres \"$PGDATA\"\n\n if [ -z \"$(ls -A \"$PGDATA\")\" ]; then\n gosu postgres initdb\n fi\n\n exec gosu postgres \"$@\"\nfi\n\nexec \"$@\"\n\n\n\nNote:\nThis script uses the exec Bash command\nso that the final running application becomes the container's PID 1. 
This allows\nthe application to receive any Unix signals sent to the container.\nSee the ENTRYPOINT\nhelp for more details.\n\nThe helper script is copied into the container and run via ENTRYPOINT on\ncontainer start:\nCOPY ./docker-entrypoint.sh /\nENTRYPOINT [\"/docker-entrypoint.sh\"]\n\nThis script allows the user to interact with Postgres in several ways.\nIt can simply start Postgres:\n$ docker run postgres\n\nOr, it can be used to run Postgres and pass parameters to the server:\n$ docker run postgres postgres --help\n\nLastly, it could also be used to start a totally different tool, such as Bash:\n$ docker run --rm -it postgres bash\n\nVOLUME\nThe VOLUME instruction should be used to expose any database storage area,\nconfiguration storage, or files/folders created by your docker container. You\nare strongly encouraged to use VOLUME for any mutable and/or user-serviceable\nparts of your image.\nUSER\nIf a service can run without privileges, use USER to change to a non-root\nuser. Start by creating the user and group in the Dockerfile with something\nlike RUN groupadd -r postgres && useradd -r -g postgres postgres.\n\nNote: Users and groups in an image get a non-deterministic\nUID/GID in that the \u201cnext\u201d UID/GID gets assigned regardless of image\nrebuilds. So, if it\u2019s critical, you should assign an explicit UID/GID.\n\nYou should avoid installing or using sudo since it has unpredictable TTY and\nsignal-forwarding behavior that can cause more problems than it solves. If\nyou absolutely need functionality similar to sudo (e.g., initializing the\ndaemon as root but running it as non-root), you may be able to use\n\u201cgosu\u201d. \nLastly, to reduce layers and complexity, avoid switching USER back\nand forth frequently.\nWORKDIR\nFor clarity and reliability, you should always use absolute paths for your\nWORKDIR. 
Also, you should use WORKDIR instead of proliferating\ninstructions like RUN cd \u2026 do-something, which are hard to read,\ntroubleshoot, and maintain.\nONBUILD\nONBUILD is only useful for images that are going to be built FROM a given\nimage. For example, you would use ONBUILD for a language stack image that\nbuilds arbitrary user software written in that language within the\nDockerfile, as you can see in Ruby\u2019s ONBUILD variants. \nImages built from ONBUILD should get a separate tag, for example:\nruby:1.9-onbuild or ruby:2.0-onbuild.\nBe careful when putting ADD or COPY in ONBUILD. The \u201conbuild\u201d image will\nfail catastrophically if the new build's context is missing the resource being\nadded. Adding a separate tag, as recommended above, will help mitigate this by\nallowing the Dockerfile author to make a choice.\nExamples For Official Repositories\nThese Official Repos have exemplary Dockerfiles:\n\nGo\nPerl\nHy\nRails\n\nAdditional Resources:\n\nDockerfile Reference\nMore about Base Images\nMore about Automated Builds\nGuidelines for Creating Official \nRepositories",
"title": "Best practices for writing Dockerfiles"
},
{
"loc": "/articles/dockerfile_best-practices#best-practices-for-writing-dockerfiles",
"tags": "",
"text": "",
"title": "Best practices for writing Dockerfiles"
},
{
"loc": "/articles/dockerfile_best-practices#overview",
"tags": "",
"text": "Docker can build images automatically by reading the instructions from a Dockerfile , a text file that contains all the commands, in order, needed to\nbuild a given image. Dockerfile s adhere to a specific format and use a\nspecific set of instructions. You can learn the basics on the Dockerfile Reference page. If\nyou\u2019re new to writing Dockerfile s, you should start there. This document covers the best practices and methods recommended by Docker,\nInc. and the Docker community for creating easy-to-use, effective Dockerfile s. We strongly suggest you follow these recommendations (in fact,\nif you\u2019re creating an Official Image, you must adhere to these practices). You can see many of these practices and recommendations in action in the buildpack-deps Dockerfile . Note: for more detailed explanations of any of the Dockerfile commands\nmentioned here, visit the Dockerfile Reference page.",
"title": "Overview"
},
{
"loc": "/articles/dockerfile_best-practices#general-guidelines-and-recommendations",
"tags": "",
"text": "Containers should be ephemeral The container produced by the image your Dockerfile defines should be as\nephemeral as possible. By \u201cephemeral,\u201d we mean that it can be stopped and\ndestroyed and a new one built and put in place with an absolute minimum of\nset-up and configuration. Use a .dockerignore file For faster uploading and efficiency during docker build , you should use\na .dockerignore file to exclude files or directories from the build\ncontext and final image. For example, unless .git is needed by your build\nprocess or scripts, you should add it to .dockerignore , which can save many\nmegabytes worth of upload time. Avoid installing unnecessary packages In order to reduce complexity, dependencies, file sizes, and build times, you\nshould avoid installing extra or unnecessary packages just because they\nmight be \u201cnice to have.\u201d For example, you don\u2019t need to include a text editor\nin a database image. Run only one process per container In almost all cases, you should only run a single process in a single\ncontainer. Decoupling applications into multiple containers makes it much\neasier to scale horizontally and reuse containers. If that service depends on\nanother service, make use of container linking . Minimize the number of layers You need to find the balance between readability (and thus long-term\nmaintainability) of the Dockerfile and minimizing the number of layers it\nuses. Be strategic and cautious about the number of layers you use. Sort multi-line arguments Whenever possible, ease later changes by sorting multi-line arguments\nalphanumerically. This will help you avoid duplication of packages and make the\nlist much easier to update. This also makes PRs a lot easier to read and\nreview. Adding a space before a backslash ( \\ ) helps as well. 
Here\u2019s an example from the buildpack-deps image : RUN apt-get update && apt-get install -y \\\n bzr \\\n cvs \\\n git \\\n mercurial \\\n subversion Build cache During the process of building an image Docker will step through the\ninstructions in your Dockerfile executing each in the order specified.\nAs each instruction is examined Docker will look for an existing image in its\ncache that it can reuse, rather than creating a new (duplicate) image.\nIf you do not want to use the cache at all you can use the --no-cache=true \noption on the docker build command. However, if you do let Docker use its cache then it is very important to\nunderstand when it will, and will not, find a matching image. The basic rules\nthat Docker will follow are outlined below: Starting with a base image that is already in the cache, the next\ninstruction is compared against all child images derived from that base\nimage to see if one of them was built using the exact same instruction. If\nnot, the cache is invalidated. In most cases simply comparing the instruction in the Dockerfile with one\nof the child images is sufficient. However, certain instructions require\na little more examination and explanation. In the case of the ADD and COPY instructions, the contents of the file(s)\nbeing put into the image are examined. Specifically, a checksum is done\nof the file(s) and then that checksum is used during the cache lookup.\nIf anything has changed in the file(s), including its metadata,\nthen the cache is invalidated. Aside from the ADD and COPY commands cache checking will not look at the\nfiles in the container to determine a cache match. For example, when processing\na RUN apt-get -y update command the files updated in the container\nwill not be examined to determine if a cache hit exists. In that case just\nthe command string itself will be used to find a match. 
Once the cache is invalidated, all subsequent Dockerfile commands will\ngenerate new images and the cache will not be used.",
"title": "General guidelines and recommendations"
},
{
"loc": "/articles/dockerfile_best-practices#the-dockerfile-instructions",
"tags": "",
"text": "Below you'll find recommendations for the best way to write the\nvarious instructions available for use in a Dockerfile . FROM Whenever possible, use current Official Repositories as the basis for your\nimage. We recommend the Debian image \nsince it\u2019s very tightly controlled and kept extremely minimal (currently under\n100 mb), while still being a full distribution. RUN As always, to make your Dockerfile more readable, understandable, and\nmaintainable, put long or complex RUN statements on multiple lines separated\nwith backslashes. Probably the most common use-case for RUN is an application of apt-get .\nWhen using apt-get , here are a few things to keep in mind: Don\u2019t do RUN apt-get update on a single line. This will cause\ncaching issues if the referenced archive gets updated, which will make your\nsubsequent apt-get install fail without comment. Avoid RUN apt-get upgrade or dist-upgrade , since many of the \u201cessential\u201d\npackages from the base images will fail to upgrade inside an unprivileged\ncontainer. If a base package is out of date, you should contact its\nmaintainers. If you know there\u2019s a particular package, foo , that needs to be\nupdated, use apt-get install -y foo and it will update automatically. Do write instructions like: RUN apt-get update apt-get install -y package-bar package-foo package-baz Writing the instruction this way not only makes it easier to read\nand maintain, but also, by including apt-get update , ensures that the cache\nwill naturally be busted and the latest versions will be installed with no\nfurther coding or manual intervention required. Further natural cache-busting can be realized by version-pinning packages\n(e.g., package-foo=1.3.* ). This will force retrieval of that version\nregardless of what\u2019s in the cache.\nWriting your apt-get code this way will greatly ease maintenance and reduce\nfailures due to unanticipated changes in required packages. 
Example Below is a well-formed RUN instruction that demonstrates the above\nrecommendations. Note that the last package, s3cmd , specifies a version 1.1.0* . If the image previously used an older version, specifying the new one\nwill cause a cache bust of apt-get update and ensure the installation of\nthe new version (which in this case had a new, required feature). RUN apt-get update && apt-get install -y \\\n aufs-tools \\\n automake \\\n btrfs-tools \\\n build-essential \\\n curl \\\n dpkg-sig \\\n git \\\n iptables \\\n libapparmor-dev \\\n libcap-dev \\\n libsqlite3-dev \\\n lxc=1.0* \\\n mercurial \\\n parallel \\\n reprepro \\\n ruby1.9.1 \\\n ruby1.9.1-dev \\\n s3cmd=1.1.0* Writing the instruction this way also helps you avoid potential duplication of\na given package because it is much easier to read than an instruction like: RUN apt-get install -y package-foo && apt-get install -y package-bar CMD The CMD instruction should be used to run the software contained by your\nimage, along with any arguments. CMD should almost always be used in the\nform of CMD [\"executable\", \"param1\", \"param2\"\u2026] . Thus, if the image is for a\nservice (Apache, Rails, etc.), you would run something like CMD [\"apache2\",\"-DFOREGROUND\"] . Indeed, this form of the instruction is\nrecommended for any service-based image. In most other cases, CMD should be given an interactive shell (bash, python,\nperl, etc), for example, CMD [\"perl\", \"-de0\"] , CMD [\"python\"] , or CMD [\"php\", \"-a\"] . Using this form means that when you execute something like docker run -it python , you\u2019ll get dropped into a usable shell, ready to go. CMD should rarely be used in the manner of CMD [\"param\", \"param\"] in\nconjunction with ENTRYPOINT , unless\nyou and your expected users are already quite familiar with how ENTRYPOINT \nworks. 
EXPOSE The EXPOSE instruction indicates the ports on which a container will listen\nfor connections. Consequently, you should use the common, traditional port for\nyour application. For example, an image containing the Apache web server would\nuse EXPOSE 80 , while an image containing MongoDB would use EXPOSE 27017 and\nso on. For external access, your users can execute docker run with a flag indicating\nhow to map the specified port to the port of their choice.\nFor container linking, Docker provides environment variables for the path from\nthe recipient container back to the source (i.e., MYSQL_PORT_3306_TCP ). ENV In order to make new software easier to run, you can use ENV to update the PATH environment variable for the software your container installs. For\nexample, ENV PATH /usr/local/nginx/bin:$PATH will ensure that CMD [\"nginx\"] \njust works. The ENV instruction is also useful for providing required environment\nvariables specific to services you wish to containerize, such as Postgres\u2019s PGDATA . Lastly, ENV can also be used to set commonly used version numbers so that\nversion bumps are easier to maintain, as seen in the following example: ENV PG_MAJOR 9.3\nENV PG_VERSION 9.3.4\nRUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress \u2026\nENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH Similar to having constant variables in a program (as opposed to hard-coding\nvalues), this approach lets you change a single ENV instruction to\nauto-magically bump the version of the software in your container. ADD or COPY Although ADD and COPY are functionally similar, generally speaking, COPY \nis preferred. That\u2019s because it\u2019s more transparent than ADD . COPY only\nsupports the basic copying of local files into the container, while ADD has\nsome features (like local-only tar extraction and remote URL support) that are\nnot immediately obvious. 
Consequently, the best use for ADD is local tar file\nauto-extraction into the image, as in ADD rootfs.tar.xz / . If you have multiple Dockerfile steps that use different files from your\ncontext, COPY them individually, rather than all at once. This will ensure that\neach step's build cache is only invalidated (forcing the step to be re-run) if the\nspecifically required files change. For example: COPY requirements.txt /tmp/\nRUN pip install /tmp/requirements.txt\nCOPY . /tmp/ This results in fewer cache invalidations for the RUN step than if you put the COPY . /tmp/ before it. Because image size matters, using ADD to fetch packages from remote URLs is\nstrongly discouraged; you should use curl or wget instead. That way you can\ndelete the files you no longer need after they've been extracted and you won't\nhave to add another layer in your image. For example, you should avoid doing\nthings like: ADD http://example.com/big.tar.xz /usr/src/things/\nRUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things\nRUN make -C /usr/src/things all And instead, do something like: RUN mkdir -p /usr/src/things \\\n && curl -SL http://example.com/big.tar.xz \\\n | tar -xJC /usr/src/things \\\n && make -C /usr/src/things all For other items (files, directories) that do not require ADD \u2019s tar\nauto-extraction capability, you should always use COPY . ENTRYPOINT The best use for ENTRYPOINT is to set the image's main command, allowing that\nimage to be run as though it were that command (and then use CMD as the\ndefault flags). Let's start with an example of an image for the command line tool s3cmd : ENTRYPOINT [\"s3cmd\"]\nCMD [\"--help\"] Now the image can be run like this to show the command's help: $ docker run s3cmd Or using the right parameters to execute a command: $ docker run s3cmd ls s3://mybucket This is useful because the image name can double as a reference to the binary as\nshown in the command above. 
The ENTRYPOINT instruction can also be used in combination with a helper\nscript, allowing it to function in a similar way to the command above, even\nwhen starting the tool may require more than one step. For example, the Postgres Official Image \nuses the following script as its ENTRYPOINT : #!/bin/bash\nset -e\n\nif [ \"$1\" = 'postgres' ]; then\n chown -R postgres \"$PGDATA\"\n\n if [ -z \"$(ls -A \"$PGDATA\")\" ]; then\n gosu postgres initdb\n fi\n\n exec gosu postgres \"$@\"\nfi\n\nexec \"$@\" Note :\nThis script uses the exec Bash command \nso that the final running application becomes the container's PID 1. This allows\nthe application to receive any Unix signals sent to the container.\nSee the ENTRYPOINT \nhelp for more details. The helper script is copied into the container and run via ENTRYPOINT on\ncontainer start: COPY ./docker-entrypoint.sh /\nENTRYPOINT [\"/docker-entrypoint.sh\"] This script allows the user to interact with Postgres in several ways. It can simply start Postgres: $ docker run postgres Or, it can be used to run Postgres and pass parameters to the server: $ docker run postgres postgres --help Lastly, it could also be used to start a totally different tool, such as Bash: $ docker run --rm -it postgres bash VOLUME The VOLUME instruction should be used to expose any database storage area,\nconfiguration storage, or files/folders created by your docker container. You\nare strongly encouraged to use VOLUME for any mutable and/or user-serviceable\nparts of your image. USER If a service can run without privileges, use USER to change to a non-root\nuser. Start by creating the user and group in the Dockerfile with something\nlike RUN groupadd -r postgres && useradd -r -g postgres postgres . Note: Users and groups in an image get a non-deterministic\nUID/GID in that the \u201cnext\u201d UID/GID gets assigned regardless of image\nrebuilds. So, if it\u2019s critical, you should assign an explicit UID/GID. 
You should avoid installing or using sudo since it has unpredictable TTY and\nsignal-forwarding behavior that can cause more problems than it solves. If\nyou absolutely need functionality similar to sudo (e.g., initializing the\ndaemon as root but running it as non-root), you may be able to use \u201cgosu\u201d . Lastly, to reduce layers and complexity, avoid switching USER back\nand forth frequently. WORKDIR For clarity and reliability, you should always use absolute paths for your WORKDIR . Also, you should use WORKDIR instead of proliferating\ninstructions like RUN cd \u2026 do-something , which are hard to read,\ntroubleshoot, and maintain. ONBUILD ONBUILD is only useful for images that are going to be built FROM a given\nimage. For example, you would use ONBUILD for a language stack image that\nbuilds arbitrary user software written in that language within the Dockerfile , as you can see in Ruby\u2019s ONBUILD variants . Images built from ONBUILD should get a separate tag, for example: ruby:1.9-onbuild or ruby:2.0-onbuild . Be careful when putting ADD or COPY in ONBUILD . The \u201conbuild\u201d image will\nfail catastrophically if the new build's context is missing the resource being\nadded. Adding a separate tag, as recommended above, will help mitigate this by\nallowing the Dockerfile author to make a choice.",
"title": "The Dockerfile instructions"
},
{
"loc": "/articles/dockerfile_best-practices#examples-for-official-repositories",
"tags": "",
"text": "These Official Repos have exemplary Dockerfile s: Go Perl Hy Rails",
"title": "Examples For Official Repositories"
},
{
"loc": "/articles/dockerfile_best-practices#additional-resources",
"tags": "",
"text": "Dockerfile Reference More about Base Images More about Automated Builds Guidelines for Creating Official \nRepositories",
"title": "Additional Resources:"
},
{
"loc": "/articles/certificates/",
"tags": "",
"text": "Using certificates for repository client verification\nIn Running Docker with HTTPS, you learned that, by default,\nDocker runs via a non-networked Unix socket and TLS must be enabled in order\nto have the Docker client and the daemon communicate securely over HTTPS.\nNow, you will see how to allow the Docker registry (i.e., a server) to\nverify that the Docker daemon (i.e., a client) has the right to access the\nimages being hosted with certificate-based client-server authentication.\nWe will show you how to install a Certificate Authority (CA) root certificate\nfor the registry and how to set the client TLS certificate for verification.\nUnderstanding the configuration\nA custom certificate is configured by creating a directory under\n/etc/docker/certs.d using the same name as the registry's hostname (e.g.,\nlocalhost). All *.crt files are added to this directory as CA roots.\n\nNote:\nIn the absence of any root certificate authorities, Docker\nwill use the system default (i.e., host's root CA set).\n\nThe presence of one or more filename.key/cert pairs indicates to Docker\nthat there are custom certificates required for access to the desired\nrepository.\n\nNote:\nIf there are multiple certificates, each will be tried in alphabetical\norder. If there is an authentication error (e.g., 403, 404, 5xx, etc.), Docker\nwill continue to try with the next certificate.\n\nOur example is set up like this:\n/etc/docker/certs.d/ -- Certificate directory\n\u2514\u2500\u2500 localhost -- Hostname\n \u251c\u2500\u2500 client.cert -- Client certificate\n \u251c\u2500\u2500 client.key -- Client key\n \u2514\u2500\u2500 localhost.crt -- Registry certificate\n\nCreating the client certificates\nYou will use OpenSSL's genrsa and req commands to first generate an RSA\nkey and then use the key to create the certificate. 
\n$ openssl genrsa -out client.key 1024\n$ openssl req -new -x509 -text -key client.key -out client.cert\n\n\nWarning: \nUsing TLS and managing a CA is an advanced topic.\nYou should be familiar with OpenSSL, x509, and TLS before\nattempting to use them in production. \nWarning:\nThese TLS commands will only generate a working set of certificates on Linux.\nThe version of OpenSSL in Mac OS X is incompatible with the type of\ncertificate Docker requires.\n\nTesting the verification setup\nYou can test this setup by using Apache to host a Docker registry.\nFor this purpose, you can copy a registry tree (containing images) inside\nthe Apache root.\n\nNote:\nYou can find such an example here - which contains the busybox image.\n\nOnce you set up the registry, you can use the following Apache configuration\nto implement certificate-based protection.\n# This must be in the root context, otherwise it causes a re-negotiation\n# which is not supported by the TLS implementation in go\nSSLVerifyClient optional_no_ca\n\n<Location /v1>\nAction cert-protected /cgi-bin/cert.cgi\nSetHandler cert-protected\n\nHeader set x-docker-registry-version \"0.6.2\"\nSetEnvIf Host (.*) custom_host=$1\nHeader set X-Docker-Endpoints \"%{custom_host}e\"\n</Location>\n\nSave the above content as /etc/httpd/conf.d/registry.conf, and\ncontinue with creating a cert.cgi file under /var/www/cgi-bin/.\n#!/bin/bash\nif [ \"$HTTPS\" != \"on\" ]; then\n echo \"Status: 403 Not using SSL\"\n echo \"x-docker-registry-version: 0.6.2\"\n echo\n exit 0\nfi\nif [ \"$SSL_CLIENT_VERIFY\" == \"NONE\" ]; then\n echo \"Status: 403 Client certificate invalid\"\n echo \"x-docker-registry-version: 0.6.2\"\n echo\n exit 0\nfi\necho \"Content-length: $(stat --printf='%s' $PATH_TRANSLATED)\"\necho \"x-docker-registry-version: 0.6.2\"\necho \"X-Docker-Endpoints: $SERVER_NAME\"\necho \"X-Docker-Size: 0\"\necho\n\ncat $PATH_TRANSLATED\n\nThis CGI script will ensure that all requests to /v1 without a valid\ncertificate will be 
returned with a 403 (i.e., HTTP forbidden) error.",
"title": "Using certificates for repository client verification"
},
{
"loc": "/articles/certificates#using-certificates-for-repository-client-verification",
"tags": "",
"text": "In Running Docker with HTTPS , you learned that, by default,\nDocker runs via a non-networked Unix socket and TLS must be enabled in order\nto have the Docker client and the daemon communicate securely over HTTPS. Now, you will see how to allow the Docker registry (i.e., a server ) to\nverify that the Docker daemon (i.e., a client ) has the right to access the\nimages being hosted with certificate-based client-server authentication . We will show you how to install a Certificate Authority (CA) root certificate\nfor the registry and how to set the client TLS certificate for verification.",
"title": "Using certificates for repository client verification"
},
{
"loc": "/articles/certificates#understanding-the-configuration",
"tags": "",
"text": "A custom certificate is configured by creating a directory under /etc/docker/certs.d using the same name as the registry's hostname (e.g., localhost ). All *.crt files are added to this directory as CA roots. Note: \nIn the absence of any root certificate authorities, Docker\nwill use the system default (i.e., host's root CA set). The presence of one or more filename .key/cert pairs indicates to Docker\nthat there are custom certificates required for access to the desired\nrepository. Note: \nIf there are multiple certificates, each will be tried in alphabetical\norder. If there is an authentication error (e.g., 403, 404, 5xx, etc.), Docker\nwill continue to try with the next certificate. Our example is set up like this: /etc/docker/certs.d/ -- Certificate directory\n\u2514\u2500\u2500 localhost -- Hostname\n \u251c\u2500\u2500 client.cert -- Client certificate\n \u251c\u2500\u2500 client.key -- Client key\n \u2514\u2500\u2500 localhost.crt -- Registry certificate",
"title": "Understanding the configuration"
},
{
"loc": "/articles/certificates#creating-the-client-certificates",
"tags": "",
"text": "You will use OpenSSL's genrsa and req commands to first generate an RSA\nkey and then use the key to create the certificate. $ openssl genrsa -out client.key 1024\n$ openssl req -new -x509 -text -key client.key -out client.cert Warning: \nUsing TLS and managing a CA is an advanced topic.\nYou should be familiar with OpenSSL, x509, and TLS before\nattempting to use them in production. Warning: \nThese TLS commands will only generate a working set of certificates on Linux.\nThe version of OpenSSL in Mac OS X is incompatible with the type of\ncertificate Docker requires.",
"title": "Creating the client certificates"
},
{
"loc": "/articles/certificates#testing-the-verification-setup",
"tags": "",
"text": "You can test this setup by using Apache to host a Docker registry.\nFor this purpose, you can copy a registry tree (containing images) inside\nthe Apache root. Note: \nYou can find such an example here - which contains the busybox image. Once you set up the registry, you can use the following Apache configuration\nto implement certificate-based protection. # This must be in the root context, otherwise it causes a re-negotiation\n# which is not supported by the TLS implementation in go\nSSLVerifyClient optional_no_ca <Location /v1> \nAction cert-protected /cgi-bin/cert.cgi\nSetHandler cert-protected\n\nHeader set x-docker-registry-version \"0.6.2\"\nSetEnvIf Host (.*) custom_host=$1\nHeader set X-Docker-Endpoints \"%{custom_host}e\" </Location> Save the above content as /etc/httpd/conf.d/registry.conf , and\ncontinue with creating a cert.cgi file under /var/www/cgi-bin/ . #!/bin/bash\nif [ \"$HTTPS\" != \"on\" ]; then\n echo \"Status: 403 Not using SSL\"\n echo \"x-docker-registry-version: 0.6.2\"\n echo\n exit 0\nfi\nif [ \"$SSL_CLIENT_VERIFY\" == \"NONE\" ]; then\n echo \"Status: 403 Client certificate invalid\"\n echo \"x-docker-registry-version: 0.6.2\"\n echo\n exit 0\nfi\necho \"Content-length: $(stat --printf='%s' $PATH_TRANSLATED)\"\necho \"x-docker-registry-version: 0.6.2\"\necho \"X-Docker-Endpoints: $SERVER_NAME\"\necho \"X-Docker-Size: 0\"\necho\n\ncat $PATH_TRANSLATED This CGI script will ensure that all requests to /v1 without a valid\ncertificate will be returned with a 403 (i.e., HTTP forbidden) error.",
"title": "Testing the verification setup"
},
{
"loc": "/articles/using_supervisord/",
"tags": "",
"text": "Using Supervisor with Docker\n\nNote:\n- If you don't like sudo then see Giving non-root\n access\n\nTraditionally a Docker container runs a single process when it is\nlaunched, for example an Apache daemon or an SSH server daemon. Often\nthough you want to run more than one process in a container. There are a\nnumber of ways you can achieve this ranging from using a simple Bash\nscript as the value of your container's CMD instruction to installing\na process management tool.\nIn this example we're going to make use of the process management tool,\nSupervisor, to manage multiple processes in\nour container. Using Supervisor allows us to better control, manage, and\nrestart the processes we want to run. To demonstrate this we're going to\ninstall and manage both an SSH daemon and an Apache daemon.\nCreating a Dockerfile\nLet's start by creating a basic Dockerfile for our\nnew image.\nFROM ubuntu:13.04\nMAINTAINER examples@docker.com\n\nInstalling Supervisor\nWe can now install our SSH and Apache daemons as well as Supervisor in\nour container.\nRUN apt-get update && apt-get install -y openssh-server apache2 supervisor\nRUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor\n\nHere we're installing the openssh-server,\napache2 and supervisor\n(which provides the Supervisor daemon) packages. We're also creating four\nnew directories that are needed to run our SSH daemon and Supervisor.\nAdding Supervisor's configuration file\nNow let's add a configuration file for Supervisor. 
The default file is\ncalled supervisord.conf and is located in\n/etc/supervisor/conf.d/.\nCOPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf\n\nLet's see what is inside our supervisord.conf\nfile.\n[supervisord]\nnodaemon=true\n\n[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n[program:apache2]\ncommand=/bin/bash -c \"source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND\"\n\nThe supervisord.conf configuration file contains\ndirectives that configure Supervisor and the processes it manages. The\nfirst block [supervisord] provides configuration\nfor Supervisor itself. We're using one directive, nodaemon,\nwhich tells Supervisor to run interactively rather than\ndaemonize.\nThe next two blocks manage the services we wish to control. Each block\ncontrols a separate process. The blocks contain a single directive,\ncommand, which specifies what command to run to\nstart each process.\nExposing ports and running Supervisor\nNow let's finish our Dockerfile by exposing some\nrequired ports and specifying the CMD instruction\nto start Supervisor when our container launches.\nEXPOSE 22 80\nCMD [\"/usr/bin/supervisord\"]\n\nHere we've exposed ports 22 and 80 on the container and we're running\nthe /usr/bin/supervisord binary when the container\nlaunches.\nBuilding our image\nWe can now build our new image.\n$ sudo docker build -t yourname/supervisord .\n\nRunning our Supervisor container\nOnce we've got a built image we can launch a container from it.\n$ sudo docker run -p 22 -p 80 -t -i yourname/supervisord\n2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)\n2013-11-25 18:53:22,312 WARN Included extra file \"/etc/supervisor/conf.d/supervisord.conf\" during parsing\n2013-11-25 18:53:22,342 INFO supervisord started with pid 1\n2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6\n2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7\n. . 
.\n\nWe've launched a new container interactively using the docker run command.\nThat container has run Supervisor and launched the SSH and Apache daemons with\nit. We've specified the -p flag to expose ports 22 and 80. From here we can\nnow identify the exposed ports and connect to one or both of the SSH and Apache\ndaemons.",
"title": "Using Supervisor"
},
{
"loc": "/articles/using_supervisord#using-supervisor-with-docker",
"tags": "",
"text": "Note :\n- If you don't like sudo then see Giving non-root\n access Traditionally a Docker container runs a single process when it is\nlaunched, for example an Apache daemon or an SSH server daemon. Often\nthough you want to run more than one process in a container. There are a\nnumber of ways you can achieve this ranging from using a simple Bash\nscript as the value of your container's CMD instruction to installing\na process management tool. In this example we're going to make use of the process management tool, Supervisor , to manage multiple processes in\nour container. Using Supervisor allows us to better control, manage, and\nrestart the processes we want to run. To demonstrate this we're going to\ninstall and manage both an SSH daemon and an Apache daemon.",
"title": "Using Supervisor with Docker"
},
{
"loc": "/articles/using_supervisord#creating-a-dockerfile",
"tags": "",
"text": "Let's start by creating a basic Dockerfile for our\nnew image. FROM ubuntu:13.04\nMAINTAINER examples@docker.com",
"title": "Creating a Dockerfile"
},
{
"loc": "/articles/using_supervisord#installing-supervisor",
"tags": "",
"text": "We can now install our SSH and Apache daemons as well as Supervisor in\nour container. RUN apt-get update && apt-get install -y openssh-server apache2 supervisor\nRUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor Here we're installing the openssh-server , apache2 and supervisor \n(which provides the Supervisor daemon) packages. We're also creating four\nnew directories that are needed to run our SSH daemon and Supervisor.",
"title": "Installing Supervisor"
},
{
"loc": "/articles/using_supervisord#adding-supervisors-configuration-file",
"tags": "",
"text": "Now let's add a configuration file for Supervisor. The default file is\ncalled supervisord.conf and is located in /etc/supervisor/conf.d/ . COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf Let's see what is inside our supervisord.conf \nfile. [supervisord]\nnodaemon=true\n\n[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n[program:apache2]\ncommand=/bin/bash -c \"source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND\" The supervisord.conf configuration file contains\ndirectives that configure Supervisor and the processes it manages. The\nfirst block [supervisord] provides configuration\nfor Supervisor itself. We're using one directive, nodaemon ,\nwhich tells Supervisor to run interactively rather than\ndaemonize. The next two blocks manage the services we wish to control. Each block\ncontrols a separate process. The blocks contain a single directive, command , which specifies what command to run to\nstart each process.",
"title": "Adding Supervisor's configuration file"
},
{
"loc": "/articles/using_supervisord#exposing-ports-and-running-supervisor",
"tags": "",
"text": "Now let's finish our Dockerfile by exposing some\nrequired ports and specifying the CMD instruction\nto start Supervisor when our container launches. EXPOSE 22 80\nCMD [\"/usr/bin/supervisord\"] Here we've exposed ports 22 and 80 on the container and we're running\nthe /usr/bin/supervisord binary when the container\nlaunches.",
"title": "Exposing ports and running Supervisor"
},
{
"loc": "/articles/using_supervisord#building-our-image",
"tags": "",
"text": "We can now build our new image. $ sudo docker build -t yourname/supervisord .",
"title": "Building our image"
},
{
"loc": "/articles/using_supervisord#running-our-supervisor-container",
"tags": "",
"text": "Once we've got a built image we can launch a container from it. $ sudo docker run -p 22 -p 80 -t -i yourname/supervisord\n2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)\n2013-11-25 18:53:22,312 WARN Included extra file \"/etc/supervisor/conf.d/supervisord.conf\" during parsing\n2013-11-25 18:53:22,342 INFO supervisord started with pid 1\n2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6\n2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7\n. . . We've launched a new container interactively using the docker run command.\nThat container has run Supervisor and launched the SSH and Apache daemons with\nit. We've specified the -p flag to expose ports 22 and 80. From here we can\nnow identify the exposed ports and connect to one or both of the SSH and Apache\ndaemons.",
"title": "Running our Supervisor container"
},
{
"loc": "/articles/cfengine_process_management/",
"tags": "",
"text": "Process Management with CFEngine\nCreate Docker containers with managed processes.\nDocker monitors one process in each running container and the container\nlives or dies with that process. By introducing CFEngine inside Docker\ncontainers, we can alleviate a few of the issues that may arise:\n\nIt is possible to easily start multiple processes within a\n container, all of which will be managed automatically, with the\n normal docker run command.\nIf a managed process dies or crashes, CFEngine will start it again\n within 1 minute.\nThe container itself will live as long as the CFEngine scheduling\n daemon (cf-execd) lives. With CFEngine, we are able to decouple the\n life of the container from the uptime of the service it provides.\n\nHow it works\nCFEngine, together with the cfe-docker integration policies, is\ninstalled as part of the Dockerfile. This builds CFEngine into our\nDocker image.\nThe Dockerfile's ENTRYPOINT takes an arbitrary\nnumber of commands (with any desired arguments) as parameters. When we\nrun the Docker container these parameters get written to CFEngine\npolicies and CFEngine takes over to ensure that the desired processes\nare running in the container.\nCFEngine scans the process table for the basename of the commands given\nto the ENTRYPOINT and runs the command to start the process if the basename\nis not found. For example, if we start the container with\ndocker run \"/path/to/my/application parameters\", CFEngine will look for a\nprocess named application and run the command. If an entry for application\nis not found in the process table at any point in time, CFEngine will execute\n/path/to/my/application parameters to start the application once again. The\ncheck on the process table happens every minute.\nNote that it is therefore important that the command to start your\napplication leaves a process with the basename of the command. 
This can\nbe made more flexible by making some minor adjustments to the CFEngine\npolicies, if desired.\nUsage\nThis example assumes you have Docker installed and working. We will\ninstall and manage apache2 and sshd\nin a single container.\nThere are three steps:\n\nInstall CFEngine into the container.\nCopy the CFEngine Docker process management policy into the\n containerized CFEngine installation.\nStart your application processes as part of the docker run command.\n\nBuilding the image\nThe first two steps can be done as part of a Dockerfile, as follows.\nFROM ubuntu\nMAINTAINER Eystein M\u00e5l\u00f8y Stenberg eytein.stenberg@gmail.com\n\nRUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates\n\n# install latest CFEngine\nRUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -\nRUN echo \"deb http://cfengine.com/pub/apt $(lsb_release -cs) main\" > /etc/apt/sources.list.d/cfengine-community.list\nRUN apt-get update && apt-get install -y cfengine-community\n\n# install cfe-docker process management policy\nRUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/\nRUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/\nRUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/\nRUN rm -rf /tmp/cfe-docker-master /tmp/master.zip\n\n# apache2 and openssh are just for testing purposes, install your own apps here\nRUN apt-get update && apt-get install -y openssh-server apache2\nRUN mkdir -p /var/run/sshd\nRUN echo \"root:password\" | chpasswd # need a password for ssh\n\nENTRYPOINT [\"/var/cfengine/bin/docker_processes_run.sh\"]\n\nBy saving this file as Dockerfile to a working directory, you can then build\nyour image with the docker build command, e.g.,\ndocker build -t managed_image .\nTesting the container\nStart the container with apache2 and sshd running and managed, forwarding\na port to our SSH instance:\n$ sudo docker run -p 127.0.0.1:222:22 -d managed_image 
\"/usr/sbin/sshd\" \"/etc/init.d/apache2 start\"\n\nWe now clearly see one of the benefits of the cfe-docker integration: it\nallows us to start several processes as part of a normal docker run command.\nWe can now log in to our new container and see that both apache2 and sshd\nare running. We have set the root password to \"password\" in the Dockerfile\nabove and can use that to log in with ssh:\nssh -p222 root@127.0.0.1\n\nps -ef\nUID PID PPID C STIME TTY TIME CMD\nroot 1 0 0 07:48 ? 00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start\nroot 18 1 0 07:48 ? 00:00:00 /var/cfengine/bin/cf-execd -F\nroot 20 1 0 07:48 ? 00:00:00 /usr/sbin/sshd\nroot 32 1 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 34 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 35 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 36 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nroot 93 20 0 07:48 ? 00:00:00 sshd: root@pts/0\nroot 105 93 0 07:48 pts/0 00:00:00 -bash\nroot 112 105 0 07:49 pts/0 00:00:00 ps -ef\n\nIf we stop apache2, it will be started again within a minute by\nCFEngine.\nservice apache2 status\n Apache2 is running (pid 32).\nservice apache2 stop\n * Stopping web server apache2 ... waiting [ OK ]\nservice apache2 status\n Apache2 is NOT running.\n# ... wait up to 1 minute...\nservice apache2 status\n Apache2 is running (pid 173).\n\nAdapting to your applications\nTo make sure your applications get managed in the same manner, there are\njust two things you need to adjust from the above example:\n\nIn the Dockerfile used above, install your applications instead of\n apache2 and sshd.\nWhen you start the container with docker run,\n specify the command line arguments to your applications rather than\n apache2 and sshd.",
"title": "Process management with CFEngine"
},
{
"loc": "/articles/cfengine_process_management#process-management-with-cfengine",
"tags": "",
"text": "Create Docker containers with managed processes. Docker monitors one process in each running container and the container\nlives or dies with that process. By introducing CFEngine inside Docker\ncontainers, we can alleviate a few of the issues that may arise: It is possible to easily start multiple processes within a\n container, all of which will be managed automatically, with the\n normal docker run command. If a managed process dies or crashes, CFEngine will start it again\n within 1 minute. The container itself will live as long as the CFEngine scheduling\n daemon (cf-execd) lives. With CFEngine, we are able to decouple the\n life of the container from the uptime of the service it provides.",
"title": "Process Management with CFEngine"
},
{
"loc": "/articles/cfengine_process_management#how-it-works",
"tags": "",
"text": "CFEngine, together with the cfe-docker integration policies, is\ninstalled as part of the Dockerfile. This builds CFEngine into our\nDocker image. The Dockerfile's ENTRYPOINT takes an arbitrary\nnumber of commands (with any desired arguments) as parameters. When we\nrun the Docker container these parameters get written to CFEngine\npolicies and CFEngine takes over to ensure that the desired processes\nare running in the container. CFEngine scans the process table for the basename of the commands given\nto the ENTRYPOINT and runs the command to start the process if the basename \nis not found. For example, if we start the container with docker run \"/path/to/my/application parameters\" , CFEngine will look for a\nprocess named application and run the command. If an entry for application \nis not found in the process table at any point in time, CFEngine will execute /path/to/my/application parameters to start the application once again. The\ncheck on the process table happens every minute. Note that it is therefore important that the command to start your\napplication leaves a process with the basename of the command. This can\nbe made more flexible by making some minor adjustments to the CFEngine\npolicies, if desired.",
"title": "How it works"
},
{
"loc": "/articles/cfengine_process_management#usage",
"tags": "",
"text": "This example assumes you have Docker installed and working. We will\ninstall and manage apache2 and sshd \nin a single container. There are three steps: Install CFEngine into the container. Copy the CFEngine Docker process management policy into the\n containerized CFEngine installation. Start your application processes as part of the docker run command. Building the image The first two steps can be done as part of a Dockerfile, as follows. FROM ubuntu\nMAINTAINER Eystein M\u00e5l\u00f8y Stenberg eytein.stenberg@gmail.com \n\nRUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates\n\n# install latest CFEngine\nRUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -\nRUN echo \"deb http://cfengine.com/pub/apt $(lsb_release -cs) main\" > /etc/apt/sources.list.d/cfengine-community.list\nRUN apt-get update && apt-get install -y cfengine-community\n\n# install cfe-docker process management policy\nRUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/\nRUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/\nRUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/\nRUN rm -rf /tmp/cfe-docker-master /tmp/master.zip\n\n# apache2 and openssh are just for testing purposes, install your own apps here\nRUN apt-get update && apt-get install -y openssh-server apache2\nRUN mkdir -p /var/run/sshd\nRUN echo \"root:password\" | chpasswd # need a password for ssh\n\nENTRYPOINT [\"/var/cfengine/bin/docker_processes_run.sh\"] By saving this file as Dockerfile to a working directory, you can then build\nyour image with the docker build command, e.g., docker build -t managed_image . 
Testing the container Start the container with apache2 and sshd running and managed, forwarding\na port to our SSH instance: $ sudo docker run -p 127.0.0.1:222:22 -d managed_image \"/usr/sbin/sshd\" \"/etc/init.d/apache2 start\" We now clearly see one of the benefits of the cfe-docker integration: it\nallows you to start several processes as part of a normal docker run command. We can now log in to our new container and see that both apache2 and sshd \nare running. We have set the root password to \"password\" in the Dockerfile\nabove and can use that to log in with ssh: ssh -p222 root@127.0.0.1\n\nps -ef\nUID PID PPID C STIME TTY TIME CMD\nroot 1 0 0 07:48 ? 00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start\nroot 18 1 0 07:48 ? 00:00:00 /var/cfengine/bin/cf-execd -F\nroot 20 1 0 07:48 ? 00:00:00 /usr/sbin/sshd\nroot 32 1 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 34 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 35 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nwww-data 36 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start\nroot 93 20 0 07:48 ? 00:00:00 sshd: root@pts/0\nroot 105 93 0 07:48 pts/0 00:00:00 -bash\nroot 112 105 0 07:49 pts/0 00:00:00 ps -ef If we stop apache2, it will be started again within a minute by\nCFEngine. service apache2 status\n Apache2 is running (pid 32).\nservice apache2 stop\n * Stopping web server apache2 ... waiting [ OK ]\nservice apache2 status\n Apache2 is NOT running.\n# ... wait up to 1 minute...\nservice apache2 status\n Apache2 is running (pid 173).",
|
|
"title": "Usage"
|
|
},
|
|
{
|
|
"loc": "/articles/cfengine_process_management#adapting-to-your-applications",
|
|
"tags": "",
|
|
"text": "To make sure your applications get managed in the same manner, there are\njust two things you need to adjust from the above example: In the Dockerfile used above, install your applications instead of\n apache2 and sshd . When you start the container with docker run ,\n specify the command line arguments to your applications rather than\n apache2 and sshd .",
|
|
"title": "Adapting to your applications"
|
|
},
|
|
{
|
|
"loc": "/articles/puppet/",
|
|
"tags": "",
|
|
"text": "Using Puppet\n\nNote: Please note this is a community contributed installation path. The\nonly official installation is using the\nUbuntu installation\npath. This version may sometimes be out of date.\n\nRequirements\nTo use this guide you'll need a working installation of Puppet from\nPuppet Labs .\nThe module also currently uses the official PPA so only works with\nUbuntu.\nInstallation\nThe module is available on the Puppet\nForge and can be\ninstalled using the built-in module tool.\n$ puppet module install garethr/docker\n\nIt can also be found on\nGitHub if you would rather\ndownload the source.\nUsage\nThe module provides a puppet class for installing Docker and two defined\ntypes for managing images and containers.\nInstallation\ninclude 'docker'\n\nImages\nThe next step is probably to install a Docker image. For this, we have a\ndefined type which can be used like so:\ndocker::image { 'ubuntu': }\n\nThis is equivalent to running:\n$ sudo docker pull ubuntu\n\nNote that it will only be downloaded if an image of that name does not\nalready exist. This is downloading a large binary so on first run can\ntake a while. For that reason this define turns off the default 5 minute\ntimeout for the exec type. 
Note that you can also remove images you no\nlonger need with:\ndocker::image { 'ubuntu':\n ensure => 'absent',\n}\n\nContainers\nNow you have an image where you can run commands within a container\nmanaged by Docker.\ndocker::run { 'helloworld':\n image => 'ubuntu',\n command => '/bin/sh -c \"while true; do echo hello world; sleep 1; done\"',\n}\n\nThis is equivalent to running the following command, but under upstart:\n$ sudo docker run -d ubuntu /bin/sh -c \"while true; do echo hello world; sleep 1; done\"\n\nRun also contains a number of optional parameters:\ndocker::run { 'helloworld':\n image => 'ubuntu',\n command => '/bin/sh -c \"while true; do echo hello world; sleep 1; done\"',\n ports => ['4444', '4555'],\n volumes => ['/var/lib/couchdb', '/var/log'],\n volumes_from => '6446ea52fbc9',\n memory_limit => 10485760, # bytes\n username => 'example',\n hostname => 'example.com',\n env => ['FOO=BAR', 'FOO2=BAR2'],\n dns => ['8.8.8.8', '8.8.4.4'],\n}\n\n\nNote:\nThe ports, env, dns and volumes attributes can be set with either a single\nstring or as above with an array of values.",
|
|
"title": "Using Puppet"
|
|
},
|
|
{
|
|
"loc": "/articles/puppet#using-puppet",
|
|
"tags": "",
|
|
"text": "Note: Please note this is a community contributed installation path. The\nonly official installation is using the Ubuntu installation\npath. This version may sometimes be out of date.",
|
|
"title": "Using Puppet"
|
|
},
|
|
{
|
|
"loc": "/articles/puppet#requirements",
|
|
"tags": "",
|
|
"text": "To use this guide you'll need a working installation of Puppet from Puppet Labs . The module also currently uses the official PPA so only works with\nUbuntu.",
|
|
"title": "Requirements"
|
|
},
|
|
{
|
|
"loc": "/articles/puppet#installation",
|
|
"tags": "",
|
|
"text": "The module is available on the Puppet\nForge and can be\ninstalled using the built-in module tool. $ puppet module install garethr/docker It can also be found on GitHub if you would rather\ndownload the source.",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/articles/puppet#usage",
|
|
"tags": "",
|
|
"text": "The module provides a puppet class for installing Docker and two defined\ntypes for managing images and containers. Installation include 'docker' Images The next step is probably to install a Docker image. For this, we have a\ndefined type which can be used like so: docker::image { 'ubuntu': } This is equivalent to running: $ sudo docker pull ubuntu Note that it will only be downloaded if an image of that name does not\nalready exist. This is downloading a large binary so on first run can\ntake a while. For that reason this define turns off the default 5 minute\ntimeout for the exec type. Note that you can also remove images you no\nlonger need with: docker::image { 'ubuntu':\n ensure = 'absent',\n} Containers Now you have an image where you can run commands within a container\nmanaged by Docker. docker::run { 'helloworld':\n image = 'ubuntu',\n command = '/bin/sh -c \"while true; do echo hello world; sleep 1; done\"',\n} This is equivalent to running the following command, but under upstart: $ sudo docker run -d ubuntu /bin/sh -c \"while true; do echo hello world; sleep 1; done\" Run also contains a number of optional parameters: docker::run { 'helloworld':\n image = 'ubuntu',\n command = '/bin/sh -c \"while true; do echo hello world; sleep 1; done\"',\n ports = ['4444', '4555'],\n volumes = ['/var/lib/couchdb', '/var/log'],\n volumes_from = '6446ea52fbc9',\n memory_limit = 10485760, # bytes\n username = 'example',\n hostname = 'example.com',\n env = ['FOO=BAR', 'FOO2=BAR2'],\n dns = ['8.8.8.8', '8.8.4.4'],\n} Note: \nThe ports , env , dns and volumes attributes can be set with either a single\nstring or as above with an array of values.",
|
|
"title": "Usage"
|
|
},
|
|
{
|
|
"loc": "/articles/chef/",
|
|
"tags": "",
|
|
"text": "Using Chef\n\nNote:\nPlease note this is a community contributed installation path. The only\nofficial installation is using the\nUbuntu installation\npath. This version may sometimes be out of date.\n\nRequirements\nTo use this guide you'll need a working installation of\nChef. This cookbook supports a variety of\noperating systems.\nInstallation\nThe cookbook is available on the Chef Community\nSite and can be\ninstalled using your favorite cookbook dependency manager.\nThe source can be found on\nGitHub.\nUsage\nThe cookbook provides recipes for installing Docker, configuring init\nfor Docker, and resources for managing images and containers. It\nsupports almost all Docker functionality.\nInstallation\ninclude_recipe 'docker'\n\nImages\nThe next step is to pull a Docker image. For this, we have a resource:\ndocker_image 'samalba/docker-registry'\n\nThis is equivalent to running:\n$ sudo docker pull samalba/docker-registry\n\nThere are attributes available to control how long the cookbook will\nallow for downloading (5 minute default).\nTo remove images you no longer need:\ndocker_image 'samalba/docker-registry' do\n action :remove\nend\n\nContainers\nNow you have an image where you can run commands within a container\nmanaged by Docker.\ndocker_container 'samalba/docker-registry' do\n detach true\n port '5000:5000'\n env 'SETTINGS_FLAVOR=local'\n volume '/mnt/docker:/docker-storage'\nend\n\nThis is equivalent to running the following command, but under upstart:\n$ sudo docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry\n\nThe resources will accept a single string or an array of values for any\nDocker flags that allow multiple values.",
|
|
"title": "Using Chef"
|
|
},
|
|
{
|
|
"loc": "/articles/chef#using-chef",
|
|
"tags": "",
|
|
"text": "Note :\nPlease note this is a community contributed installation path. The only official installation is using the Ubuntu installation\npath. This version may sometimes be out of date.",
|
|
"title": "Using Chef"
|
|
},
|
|
{
|
|
"loc": "/articles/chef#requirements",
|
|
"tags": "",
|
|
"text": "To use this guide you'll need a working installation of Chef . This cookbook supports a variety of\noperating systems.",
|
|
"title": "Requirements"
|
|
},
|
|
{
|
|
"loc": "/articles/chef#installation",
|
|
"tags": "",
|
|
"text": "The cookbook is available on the Chef Community\nSite and can be\ninstalled using your favorite cookbook dependency manager. The source can be found on GitHub .",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/articles/chef#usage",
|
|
"tags": "",
|
|
"text": "The cookbook provides recipes for installing Docker, configuring init\nfor Docker, and resources for managing images and containers. It\nsupports almost all Docker functionality. Installation include_recipe 'docker' Images The next step is to pull a Docker image. For this, we have a resource: docker_image 'samalba/docker-registry' This is equivalent to running: $ sudo docker pull samalba/docker-registry There are attributes available to control how long the cookbook will\nallow for downloading (5 minute default). To remove images you no longer need: docker_image 'samalba/docker-registry' do\n action :remove\nend Containers Now you have an image where you can run commands within a container\nmanaged by Docker. docker_container 'samalba/docker-registry' do\n detach true\n port '5000:5000'\n env 'SETTINGS_FLAVOR=local'\n volume '/mnt/docker:/docker-storage'\nend This is equivalent to running the following command, but under upstart: $ sudo docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry The resources will accept a single string or an array of values for any\nDocker flags that allow multiple values.",
|
|
"title": "Usage"
|
|
},
|
|
{
|
|
"loc": "/articles/dsc/",
|
|
"tags": "",
|
|
"text": "Using PowerShell DSC\nWindows PowerShell Desired State Configuration (DSC) is a configuration\nmanagement tool that extends the existing functionality of Windows PowerShell.\nDSC uses a declarative syntax to define the state in which a target should be\nconfigured. More information about PowerShell DSC can be found at\nhttp://technet.microsoft.com/en-us/library/dn249912.aspx.\nRequirements\nTo use this guide you'll need a Windows host with PowerShell v4.0 or newer.\nThe included DSC configuration script also uses the official PPA so\nonly an Ubuntu target is supported. The Ubuntu target must already have the\nrequired OMI Server and PowerShell DSC for Linux providers installed. More\ninformation can be found at https://github.com/MSFTOSSMgmt/WPSDSCLinux.\nThe source repository listed below also includes PowerShell DSC for Linux\ninstallation and init scripts along with more detailed installation information.\nInstallation\nThe DSC configuration example source is available in the following repository:\nhttps://github.com/anweiss/DockerClientDSC. It can be cloned with:\n$ git clone https://github.com/anweiss/DockerClientDSC.git\n\nUsage\nThe DSC configuration utilizes a set of shell scripts to determine whether or\nnot the specified Docker components are configured on the target node(s). 
The\nsource repository also includes a script (RunDockerClientConfig.ps1) that can\nbe used to establish the required CIM session(s) and execute the\nSet-DscConfiguration cmdlet.\nMore detailed usage information can be found at\nhttps://github.com/anweiss/DockerClientDSC.\nInstall Docker\nThe Docker installation configuration is equivalent to running:\napt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys\\\n36A1D7869245C8950F966E92D8576A8BA88D21E9\nsh -c \"echo deb https://get.docker.com/ubuntu docker main\\\n> /etc/apt/sources.list.d/docker.list\"\napt-get update\napt-get install lxc-docker\n\n\nEnsure that your current working directory is set to the DockerClientDSC\nsource and load the DockerClient configuration into the current PowerShell\nsession\n. .\\DockerClient.ps1\n\n\nGenerate the required DSC configuration .mof file for the targeted node\nDockerClient -Hostname myhost\n\n\nA sample DSC configuration data file has also been included and can be modified\nand used in conjunction with or in place of the Hostname parameter:\nDockerClient -ConfigurationData .\\DockerConfigData.psd1\n\n\nStart the configuration application process on the targeted node\n.\\RunDockerClientConfig.ps1 -Hostname myhost\n\n\nThe RunDockerClientConfig.ps1 script can also parse a DSC configuration data\nfile and execute configurations against multiple nodes as such:\n.\\RunDockerClientConfig.ps1 -ConfigurationData .\\DockerConfigData.psd1\n\n\nImages\nImage configuration is equivalent to running: docker pull [image] or\ndocker rmi -f [IMAGE].\nUsing the same steps defined above, execute DockerClient with the Image\nparameter and apply the configuration:\nDockerClient -Hostname myhost -Image node\n.\\RunDockerClientConfig.ps1 -Hostname myhost\n\n\nYou can also configure the host to pull multiple images:\nDockerClient -Hostname myhost -Image node,mongo\n.\\RunDockerClientConfig.ps1 -Hostname myhost\n\n\nTo remove images, use a hashtable as follows:\nDockerClient -Hostname myhost 
-Image @{Name=\"node\"; Remove=$true}\n.\\RunDockerClientConfig.ps1 -Hostname $hostname\n\n\nContainers\nContainer configuration is equivalent to running:\ndocker run -d --name=[containername] -p '[port]' -e '[env]' --link '[link]'\\\n'[image]' '[command]'\n\n\nor\ndocker rm -f [containername]\n\n\nTo create or remove containers, you can use the Container parameter with one\nor more hashtables. The hashtable(s) passed to this parameter can have the\nfollowing properties:\n\nName (required)\nImage (required unless Remove property is set to $true)\nPort\nEnv\nLink\nCommand\nRemove\n\nFor example, create a hashtable with the settings for your container:\n$webContainer = @{Name=\"web\"; Image=\"anweiss/docker-platynem\"; Port=\"80:80\"}\n\n\nThen, using the same steps defined above, execute\nDockerClient with the -Image and -Container parameters:\nDockerClient -Hostname myhost -Image node -Container $webContainer\n.\\RunDockerClientConfig.ps1 -Hostname myhost\n\n\nExisting containers can also be removed as follows:\n$containerToRemove = @{Name=\"web\"; Remove=$true}\nDockerClient -Hostname myhost -Container $containerToRemove\n.\\RunDockerClientConfig.ps1 -Hostname myhost\n\n\nHere is a hashtable with all of the properties that can be used to create a\ncontainer:\n$containerProps = @{Name=\"web\"; Image=\"node:latest\"; Port=\"80:80\"; `\nEnv=\"PORT=80\"; Link=\"db:db\"; Command=\"grunt\"}",
|
|
"title": "Using PowerShell DSC"
|
|
},
|
|
{
|
|
"loc": "/articles/dsc#using-powershell-dsc",
|
|
"tags": "",
|
|
"text": "Windows PowerShell Desired State Configuration (DSC) is a configuration\nmanagement tool that extends the existing functionality of Windows PowerShell.\nDSC uses a declarative syntax to define the state in which a target should be\nconfigured. More information about PowerShell DSC can be found at http://technet.microsoft.com/en-us/library/dn249912.aspx .",
|
|
"title": "Using PowerShell DSC"
|
|
},
|
|
{
|
|
"loc": "/articles/dsc#requirements",
|
|
"tags": "",
|
|
"text": "To use this guide you'll need a Windows host with PowerShell v4.0 or newer. The included DSC configuration script also uses the official PPA so\nonly an Ubuntu target is supported. The Ubuntu target must already have the\nrequired OMI Server and PowerShell DSC for Linux providers installed. More\ninformation can be found at https://github.com/MSFTOSSMgmt/WPSDSCLinux .\nThe source repository listed below also includes PowerShell DSC for Linux\ninstallation and init scripts along with more detailed installation information.",
|
|
"title": "Requirements"
|
|
},
|
|
{
|
|
"loc": "/articles/dsc#installation",
|
|
"tags": "",
|
|
"text": "The DSC configuration example source is available in the following repository: https://github.com/anweiss/DockerClientDSC . It can be cloned with: $ git clone https://github.com/anweiss/DockerClientDSC.git",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/articles/dsc#usage",
|
|
"tags": "",
|
|
"text": "The DSC configuration utilizes a set of shell scripts to determine whether or\nnot the specified Docker components are configured on the target node(s). The\nsource repository also includes a script ( RunDockerClientConfig.ps1 ) that can\nbe used to establish the required CIM session(s) and execute the Set-DscConfiguration cmdlet. More detailed usage information can be found at https://github.com/anweiss/DockerClientDSC . Install Docker The Docker installation configuration is equivalent to running: apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys\\\n36A1D7869245C8950F966E92D8576A8BA88D21E9\nsh -c echo deb https://get.docker.com/ubuntu docker main\\ /etc/apt/sources.list.d/docker.list \napt-get update\napt-get install lxc-docker Ensure that your current working directory is set to the DockerClientDSC \nsource and load the DockerClient configuration into the current PowerShell\nsession . .\\DockerClient.ps1 Generate the required DSC configuration .mof file for the targeted node DockerClient -Hostname myhost A sample DSC configuration data file has also been included and can be modified\nand used in conjunction with or in place of the Hostname parameter: DockerClient -ConfigurationData .\\DockerConfigData.psd1 Start the configuration application process on the targeted node .\\RunDockerClientConfig.ps1 -Hostname myhost The RunDockerClientConfig.ps1 script can also parse a DSC configuration data\nfile and execute configurations against multiple nodes as such: .\\RunDockerClientConfig.ps1 -ConfigurationData .\\DockerConfigData.psd1 Images Image configuration is equivalent to running: docker pull [image] or docker rmi -f [IMAGE] . 
Using the same steps defined above, execute DockerClient with the Image \nparameter and apply the configuration: DockerClient -Hostname myhost -Image node \n.\\RunDockerClientConfig.ps1 -Hostname myhost You can also configure the host to pull multiple images: DockerClient -Hostname myhost -Image node,mongo \n.\\RunDockerClientConfig.ps1 -Hostname myhost To remove images, use a hashtable as follows: DockerClient -Hostname myhost -Image @{Name=\"node\"; Remove=$true}\n.\\RunDockerClientConfig.ps1 -Hostname $hostname Containers Container configuration is equivalent to running: docker run -d --name=[containername] -p '[port]' -e '[env]' --link '[link]'\\\n'[image]' '[command]' or docker rm -f [containername] To create or remove containers, you can use the Container parameter with one\nor more hashtables. The hashtable(s) passed to this parameter can have the\nfollowing properties: Name (required) Image (required unless Remove property is set to $true ) Port Env Link Command Remove For example, create a hashtable with the settings for your container: $webContainer = @{Name=\"web\"; Image=\"anweiss/docker-platynem\"; Port=\"80:80\"} Then, using the same steps defined above, execute DockerClient with the -Image and -Container parameters: DockerClient -Hostname myhost -Image node -Container $webContainer\n.\\RunDockerClientConfig.ps1 -Hostname myhost Existing containers can also be removed as follows: $containerToRemove = @{Name=\"web\"; Remove=$true}\nDockerClient -Hostname myhost -Container $containerToRemove\n.\\RunDockerClientConfig.ps1 -Hostname myhost Here is a hashtable with all of the properties that can be used to create a\ncontainer: $containerProps = @{Name=\"web\"; Image=\"node:latest\"; Port=\"80:80\"; `\nEnv=\"PORT=80\"; Link=\"db:db\"; Command=\"grunt\"}",
|
|
"title": "Usage"
|
|
},
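To make the hashtable-to-command mapping above concrete, the following shell sketch expands the sample container properties into the docker run line the text says they are equivalent to. The values are the sample ones from this section; the expansion itself is an illustration, not part of DockerClientDSC.

```shell
# Expand sample DSC Container hashtable properties into the equivalent
# docker run command line described in the text.
name="web"; image="node:latest"; port="80:80"
env="PORT=80"; link="db:db"; command="grunt"
cmd="docker run -d --name=$name -p '$port' -e '$env' --link '$link' '$image' '$command'"
echo "$cmd"
```

Each hashtable property maps onto exactly one docker run flag, which is why Name and Image are the only required entries.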
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking/",
|
|
"tags": "",
|
|
"text": "Link via an Ambassador Container\nIntroduction\nRather than hardcoding network links between a service consumer and\nprovider, Docker encourages service portability, for example instead of:\n(consumer) -- (redis)\n\nRequiring you to restart the consumer to attach it to a different\nredis service, you can add ambassadors:\n(consumer) -- (redis-ambassador) -- (redis)\n\nOr\n(consumer) -- (redis-ambassador) ---network--- (redis-ambassador) -- (redis)\n\nWhen you need to rewire your consumer to talk to a different Redis\nserver, you can just restart the redis-ambassador container that the\nconsumer is connected to.\nThis pattern also allows you to transparently move the Redis server to a\ndifferent docker host from the consumer.\nUsing the svendowideit/ambassador container, the link wiring is\ncontrolled entirely from the docker run parameters.\nTwo host Example\nStart actual Redis server on one Docker host\nbig-server $ sudo docker run -d --name redis crosbymichael/redis\n\nThen add an ambassador linked to the Redis server, mapping a port to the\noutside world\nbig-server $ sudo docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador\n\nOn the other host, you can set up another ambassador setting environment\nvariables for each remote port we want to proxy to the big-server\nclient-server $ sudo docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador\n\nThen on the client-server host, you can use a Redis client container\nto talk to the remote Redis server, just by linking to the local Redis\nambassador.\nclient-server $ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli\nredis 172.17.0.160:6379 ping\nPONG\n\nHow it works\nThe following example shows what the svendowideit/ambassador container\ndoes automatically (with a tiny amount of sed)\nOn the Docker host (192.168.1.52) that Redis will run on:\n# start actual redis server\n$ 
sudo docker run -d --name redis crosbymichael/redis\n\n# get a redis-cli container for connection testing\n$ sudo docker pull relateiq/redis-cli\n\n# test the redis server by talking to it directly\n$ sudo docker run -t -i --rm --link redis:redis relateiq/redis-cli\nredis 172.17.0.136:6379 ping\nPONG\nˆD\n\n# add redis ambassador\n$ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh\n\nIn the redis_ambassador container, you can see the linked Redis\ncontainers env:\n$ env\nREDIS_PORT=tcp://172.17.0.136:6379\nREDIS_PORT_6379_TCP_ADDR=172.17.0.136\nREDIS_NAME=/redis_ambassador/redis\nHOSTNAME=19d7adf4705e\nREDIS_PORT_6379_TCP_PORT=6379\nHOME=/\nREDIS_PORT_6379_TCP_PROTO=tcp\ncontainer=lxc\nREDIS_PORT_6379_TCP=tcp://172.17.0.136:6379\nTERM=xterm\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n\nThis environment is used by the ambassador socat script to expose Redis\nto the world (via the -p 6379:6379 port mapping):\n$ sudo docker rm redis_ambassador\n$ sudo ./contrib/mkimage-unittest.sh\n$ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh\n\n$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379\n\nNow ping the Redis server via the ambassador:\nNow go to a different server:\n$ sudo ./contrib/mkimage-unittest.sh\n$ sudo docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh\n\n$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379\n\nAnd get the redis-cli image so we can talk over the ambassador bridge.\n$ sudo docker pull relateiq/redis-cli\n$ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli\nredis 172.17.0.160:6379 ping\nPONG\n\nThe svendowideit/ambassador Dockerfile\nThe svendowideit/ambassador image is a small busybox image with\nsocat built in. When you start the container, it uses a small sed\nscript to parse out the (possibly multiple) link environment variables\nto set up the port forwarding. 
On the remote host, you need to set the\nvariable using the -e command line option.\n--expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379\n\nWill forward the local 1234 port to the remote IP and port, in this\ncase 192.168.1.52:6379.\n#\n#\n# first you need to build the docker-ut image\n# using ./contrib/mkimage-unittest.sh\n# then\n# docker build -t SvenDowideit/ambassador .\n# docker tag SvenDowideit/ambassador ambassador\n# then to run it (on the host that has the real backend on it)\n# docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador\n# on the remote host, you can set up another ambassador\n# docker run -t -i --name redis_ambassador --expose 6379 sh\n\nFROM docker-ut\nMAINTAINER SvenDowideit@home.org.au\n\n\nCMD env | grep _TCP= | sed 's/.*_PORT_\\([0-9]*\\)_TCP=tcp:\\/\\/\\(.*\\):\\(.*\\)/socat TCP4-LISTEN:\\1,fork,reuseaddr TCP4:\\2:\\3 \\&/' | sh && top",
|
|
"title": "Cross-Host linking using ambassador containers"
|
|
},
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking#link-via-an-ambassador-container",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Link via an Ambassador Container"
|
|
},
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking#introduction",
|
|
"tags": "",
|
|
"text": "Rather than hardcoding network links between a service consumer and\nprovider, Docker encourages service portability, for example instead of: (consumer) -- (redis) Requiring you to restart the consumer to attach it to a different redis service, you can add ambassadors: (consumer) -- (redis-ambassador) -- (redis) Or (consumer) -- (redis-ambassador) ---network--- (redis-ambassador) -- (redis) When you need to rewire your consumer to talk to a different Redis\nserver, you can just restart the redis-ambassador container that the\nconsumer is connected to. This pattern also allows you to transparently move the Redis server to a\ndifferent docker host from the consumer. Using the svendowideit/ambassador container, the link wiring is\ncontrolled entirely from the docker run parameters.",
|
|
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking#two-host-example",
|
|
"tags": "",
|
|
"text": "Start actual Redis server on one Docker host big-server $ sudo docker run -d --name redis crosbymichael/redis Then add an ambassador linked to the Redis server, mapping a port to the\noutside world big-server $ sudo docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador On the other host, you can set up another ambassador setting environment\nvariables for each remote port we want to proxy to the big-server client-server $ sudo docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador Then on the client-server host, you can use a Redis client container\nto talk to the remote Redis server, just by linking to the local Redis\nambassador. client-server $ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli\nredis 172.17.0.160:6379 ping\nPONG",
|
|
"title": "Two host Example"
|
|
},
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking#how-it-works",
|
|
"tags": "",
|
|
"text": "The following example shows what the svendowideit/ambassador container\ndoes automatically (with a tiny amount of sed ) On the Docker host (192.168.1.52) that Redis will run on: # start actual redis server\n$ sudo docker run -d --name redis crosbymichael/redis\n\n# get a redis-cli container for connection testing\n$ sudo docker pull relateiq/redis-cli\n\n# test the redis server by talking to it directly\n$ sudo docker run -t -i --rm --link redis:redis relateiq/redis-cli\nredis 172.17.0.136:6379 ping\nPONG\nˆD\n\n# add redis ambassador\n$ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh In the redis_ambassador container, you can see the linked Redis\ncontainers env : $ env\nREDIS_PORT=tcp://172.17.0.136:6379\nREDIS_PORT_6379_TCP_ADDR=172.17.0.136\nREDIS_NAME=/redis_ambassador/redis\nHOSTNAME=19d7adf4705e\nREDIS_PORT_6379_TCP_PORT=6379\nHOME=/\nREDIS_PORT_6379_TCP_PROTO=tcp\ncontainer=lxc\nREDIS_PORT_6379_TCP=tcp://172.17.0.136:6379\nTERM=xterm\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/ This environment is used by the ambassador socat script to expose Redis\nto the world (via the -p 6379:6379 port mapping): $ sudo docker rm redis_ambassador\n$ sudo ./contrib/mkimage-unittest.sh\n$ sudo docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh\n\n$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 Now ping the Redis server via the ambassador: Now go to a different server: $ sudo ./contrib/mkimage-unittest.sh\n$ sudo docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh\n\n$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 And get the redis-cli image so we can talk over the ambassador bridge. $ sudo docker pull relateiq/redis-cli\n$ sudo docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli\nredis 172.17.0.160:6379 ping\nPONG",
|
|
"title": "How it works"
|
|
},
|
|
{
|
|
"loc": "/articles/ambassador_pattern_linking#the-svendowideitambassador-dockerfile",
|
|
"tags": "",
|
|
"text": "The svendowideit/ambassador image is a small busybox image with socat built in. When you start the container, it uses a small sed \nscript to parse out the (possibly multiple) link environment variables\nto set up the port forwarding. On the remote host, you need to set the\nvariable using the -e command line option. --expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 Will forward the local 1234 port to the remote IP and port, in this\ncase 192.168.1.52:6379 . #\n#\n# first you need to build the docker-ut image\n# using ./contrib/mkimage-unittest.sh\n# then\n# docker build -t SvenDowideit/ambassador .\n# docker tag SvenDowideit/ambassador ambassador\n# then to run it (on the host that has the real backend on it)\n# docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador\n# on the remote host, you can set up another ambassador\n# docker run -t -i --name redis_ambassador --expose 6379 sh\n\nFROM docker-ut\nMAINTAINER SvenDowideit@home.org.au\n\n\nCMD env | grep _TCP= | sed 's/.*_PORT_\\([0-9]*\\)_TCP=tcp:\\/\\/\\(.*\\):\\(.*\\)/socat TCP4-LISTEN:\\1,fork,reuseaddr TCP4:\\2:\\3 \\ /' | sh top",
|
|
"title": "The svendowideit/ambassador Dockerfile"
|
|
},
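The sed expression in the ambassador's CMD can be exercised on its own. This sketch pipes one sample link variable through the same pattern and prints the socat command it generates, instead of piping it to sh as the image does.

```shell
# Run one _TCP link variable through the ambassador's sed script and
# show the socat forwarding command it would hand to sh.
echo "REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379" |
  sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
# prints: socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379 &
```

The trailing `&` backgrounds each socat listener, which is why several link variables can be forwarded by one container.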
|
|
{
|
|
"loc": "/articles/runmetrics/",
|
|
"tags": "",
|
|
"text": "Runtime Metrics\nLinux Containers rely on control groups\nwhich not only track groups of processes, but also expose metrics about\nCPU, memory, and block I/O usage. You can access those metrics and\nobtain network usage metrics as well. This is relevant for \"pure\" LXC\ncontainers, as well as for Docker containers.\nControl Groups\nControl groups are exposed through a pseudo-filesystem. In recent\ndistros, you should find this filesystem under /sys/fs/cgroup. Under\nthat directory, you will see multiple sub-directories, called devices,\nfreezer, blkio, etc.; each sub-directory actually corresponds to a different\ncgroup hierarchy.\nOn older systems, the control groups might be mounted on /cgroup, without\ndistinct hierarchies. In that case, instead of seeing the sub-directories,\nyou will see a bunch of files in that directory, and possibly some directories\ncorresponding to existing containers.\nTo figure out where your control groups are mounted, you can run:\n$ grep cgroup /proc/mounts\n\nEnumerating Cgroups\nYou can look into /proc/cgroups to see the different control group subsystems\nknown to the system, the hierarchy they belong to, and how many groups they contain.\nYou can also look at /proc/pid/cgroup to see which control groups a process\nbelongs to. The control group will be shown as a path relative to the root of\nthe hierarchy mountpoint; e.g., / means \u201cthis process has not been assigned into\na particular group\u201d, while /lxc/pumpkin means that the process is likely to be\na member of a container named pumpkin.\nFinding the Cgroup for a Given Container\nFor each container, one cgroup will be created in each hierarchy. On\nolder systems with older versions of the LXC userland tools, the name of\nthe cgroup will be the name of the container. With more recent versions\nof the LXC tools, the cgroup will be lxc/container_name.\nFor Docker containers using cgroups, the container name will be the full\nID or long ID of the container. 
If a container shows up as ae836c95b4c3\nin docker ps, its long ID might be something like\nae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79. You can\nlook it up with docker inspect or docker ps --no-trunc.\nPutting everything together to look at the memory metrics for a Docker\ncontainer, take a look at /sys/fs/cgroup/memory/lxc/longid/.\nMetrics from Cgroups: Memory, CPU, Block IO\nFor each subsystem (memory, CPU, and block I/O), you will find one or\nmore pseudo-files containing statistics.\nMemory Metrics: memory.stat\nMemory metrics are found in the \"memory\" cgroup. Note that the memory\ncontrol group adds a little overhead, because it does very fine-grained\naccounting of the memory usage on your host. Therefore, many distros\nchose to not enable it by default. Generally, to enable it, all you have\nto do is to add some kernel command-line parameters:\ncgroup_enable=memory swapaccount=1.\nThe metrics are in the pseudo-file memory.stat.\nHere is what it will look like:\ncache 11492564992\nrss 1930993664\nmapped_file 306728960\npgpgin 406632648\npgpgout 403355412\nswap 0\npgfault 728281223\npgmajfault 1724\ninactive_anon 46608384\nactive_anon 1884520448\ninactive_file 7003344896\nactive_file 4489052160\nunevictable 32768\nhierarchical_memory_limit 9223372036854775807\nhierarchical_memsw_limit 9223372036854775807\ntotal_cache 11492564992\ntotal_rss 1930993664\ntotal_mapped_file 306728960\ntotal_pgpgin 406632648\ntotal_pgpgout 403355412\ntotal_swap 0\ntotal_pgfault 728281223\ntotal_pgmajfault 1724\ntotal_inactive_anon 46608384\ntotal_active_anon 1884520448\ntotal_inactive_file 7003344896\ntotal_active_file 4489052160\ntotal_unevictable 32768\n\nThe first half (without the total_ prefix) contains statistics relevant\nto the processes within the cgroup, excluding sub-cgroups. 
The second half\n(with the total_ prefix) includes sub-cgroups as well.\nSome metrics are \"gauges\", i.e., values that can increase or decrease\n(e.g., swap, the amount of swap space used by the members of the cgroup).\nSome others are \"counters\", i.e., values that can only go up, because\nthey represent occurrences of a specific event (e.g., pgfault, which\nindicates the number of page faults which happened since the creation of\nthe cgroup; this number can never decrease).\n\n\ncache:\n the amount of memory used by the processes of this control group\n that can be associated precisely with a block on a block device.\n When you read from and write to files on disk, this amount will\n increase. This will be the case if you use \"conventional\" I/O\n (open, read,\n write syscalls) as well as mapped files (with\n mmap). It also accounts for the memory used by\n tmpfs mounts, though the reasons are unclear.\n\n\nrss:\n the amount of memory that doesn't correspond to anything on disk:\n stacks, heaps, and anonymous memory maps.\n\n\nmapped_file:\n indicates the amount of memory mapped by the processes in the\n control group. It doesn't give you information about how much\n memory is used; it rather tells you how it is used.\n\n\npgfault and pgmajfault:\n indicate the number of times that a process of the cgroup triggered\n a \"page fault\" and a \"major fault\", respectively. A page fault\n happens when a process accesses a part of its virtual memory space\n which is nonexistent or protected. The former can happen if the\n process is buggy and tries to access an invalid address (it will\n then be sent a SIGSEGV signal, typically\n killing it with the famous Segmentation fault\n message). The latter can happen when the process reads from a memory\n zone which has been swapped out, or which corresponds to a mapped\n file: in that case, the kernel will load the page from disk, and let\n the CPU complete the memory access. 
It can also happen when the\n process writes to a copy-on-write memory zone: likewise, the kernel\n will preempt the process, duplicate the memory page, and resume the\n write operation on the process' own copy of the page. \"Major\" faults\n happen when the kernel actually has to read the data from disk. When\n it just has to duplicate an existing page, or allocate an empty\n page, it's a regular (or \"minor\") fault.\n\n\nswap:\n the amount of swap currently used by the processes in this cgroup.\n\n\nactive_anon and inactive_anon:\n the amount of anonymous memory that has been identified as\n respectively active and inactive by the kernel. \"Anonymous\"\n memory is the memory that is not linked to disk pages. In other\n words, that's the equivalent of the rss counter described above. In\n fact, the very definition of the rss counter is active_anon +\n inactive_anon - tmpfs (where tmpfs is the amount of memory\n used up by tmpfs filesystems mounted by this\n control group). Now, what's the difference between \"active\" and\n \"inactive\"? Pages are initially \"active\"; and at regular intervals,\n the kernel sweeps over the memory, and tags some pages as\n \"inactive\". Whenever they are accessed again, they are immediately\n retagged \"active\". When the kernel is almost out of memory, and time\n comes to swap out to disk, the kernel will swap \"inactive\" pages.\n\n\nactive_file and inactive_file:\n cache memory, with active and inactive similar to the anon\n memory above. The exact formula is cache = active_file +\n inactive_file + tmpfs. The exact rules used by the kernel\n to move memory pages between active and inactive sets are different\n from the ones used for anonymous memory, but the general principle\n is the same. 
Note that when the kernel needs to reclaim memory, it\n is cheaper to reclaim a clean (=non modified) page from this pool,\n since it can be reclaimed immediately (while anonymous pages and\n dirty/modified pages have to be written to disk first).\n\n\nunevictable:\n the amount of memory that cannot be reclaimed; generally, it will\n account for memory that has been \"locked\" with mlock.\n It is often used by crypto frameworks to make sure that\n secret keys and other sensitive material never gets swapped out to\n disk.\n\n\nmemory and memsw limits:\n These are not really metrics, but a reminder of the limits applied\n to this cgroup. The first one indicates the maximum amount of\n physical memory that can be used by the processes of this control\n group; the second one indicates the maximum amount of RAM+swap.\n\n\nAccounting for memory in the page cache is very complex. If two\nprocesses in different control groups both read the same file\n(ultimately relying on the same blocks on disk), the corresponding\nmemory charge will be split between the control groups. It's nice, but\nit also means that when a cgroup is terminated, it could increase the\nmemory usage of another cgroup, because they are not splitting the cost\nanymore for those memory pages.\nCPU metrics: cpuacct.stat\nNow that we've covered memory metrics, everything else will look very\nsimple in comparison. CPU metrics will be found in the\ncpuacct controller.\nFor each container, you will find a pseudo-file cpuacct.stat,\ncontaining the CPU usage accumulated by the processes of the container,\nbroken down between user and system time. If you're not familiar\nwith the distinction, user is the time during which the processes were\nin direct control of the CPU (i.e., executing process code), and system\nis the time during which the CPU was executing system calls on behalf of\nthose processes.\nThose times are expressed in ticks of 1/100th of a second. Actually,\nthey are expressed in \"user jiffies\". 
There are USER_HZ\n\"jiffies\" per second, and on x86 systems,\nUSER_HZ is 100. This used to map exactly to the\nnumber of scheduler \"ticks\" per second; but with the advent of higher\nfrequency scheduling, as well as tickless kernels, the number of kernel ticks\nwasn't relevant anymore. It stuck around anyway, mainly for legacy and\ncompatibility reasons.\nBlock I/O metrics\nBlock I/O is accounted in the blkio controller.\nDifferent metrics are scattered across different files. While you can\nfind in-depth details in the blkio-controller\nfile in the kernel documentation, here is a short list of the most\nrelevant ones:\n\n\nblkio.sectors:\n contains the number of 512-byte sectors read and written by the\n processes that are members of the cgroup, device by device. Reads and writes\n are merged in a single counter.\n\n\nblkio.io_service_bytes:\n indicates the number of bytes read and written by the cgroup. It has\n 4 counters per device, because for each device, it differentiates\n between synchronous vs. asynchronous I/O, and reads vs. writes.\n\n\nblkio.io_serviced:\n the number of I/O operations performed, regardless of their size. It\n also has 4 counters per device.\n\n\nblkio.io_queued:\n indicates the number of I/O operations currently queued for this\n cgroup. In other words, if the cgroup isn't doing any I/O, this will\n be zero. Note that the opposite is not true. In other words, if\n there is no I/O queued, it does not mean that the cgroup is idle\n (I/O-wise). It could be doing purely synchronous reads on an\n otherwise quiescent device, which is therefore able to handle them\n immediately, without queuing. Also, while it is helpful to figure\n out which cgroup is putting stress on the I/O subsystem, keep in\n mind that it is a relative quantity. 
Even if a process group does\n not perform more I/O, its queue size can increase just because the\n device load increases because of other devices.\n\n\nNetwork Metrics\nNetwork metrics are not exposed directly by control groups. There is a\ngood explanation for that: network interfaces exist within the context\nof network namespaces. The kernel could probably accumulate metrics\nabout packets and bytes sent and received by a group of processes, but\nthose metrics wouldn't be very useful. You want per-interface metrics\n(because traffic happening on the local lo\ninterface doesn't really count). But since processes in a single cgroup\ncan belong to multiple network namespaces, those metrics would be harder\nto interpret: multiple network namespaces means multiple lo\ninterfaces, potentially multiple eth0\ninterfaces, etc.; so this is why there is no easy way to gather network\nmetrics with control groups.\nInstead, we can gather network metrics from other sources:\nIPtables\nIPtables (or rather, the netfilter framework for which iptables is just\nan interface) can do some serious accounting.\nFor instance, you can set up a rule to account for the outbound HTTP\ntraffic on a web server:\n$ iptables -I OUTPUT -p tcp --sport 80\n\nThere is no -j or -g flag,\nso the rule will just count matched packets and go to the following\nrule.\nLater, you can check the values of the counters, with:\n$ iptables -nxvL OUTPUT\n\nTechnically, -n is not required, but it will\nprevent iptables from doing DNS reverse lookups, which are probably\nuseless in this scenario.\nCounters include packets and bytes. If you want to set up metrics for\ncontainer traffic like this, you could execute a for\nloop to add two iptables rules per\ncontainer IP address (one in each direction), in the FORWARD\nchain. This will only meter traffic going through the NAT\nlayer; you will also have to add traffic going through the userland\nproxy.\nThen, you will need to check those counters on a regular basis. 
If you\nhappen to use collectd, there is a nice plugin\nto automate iptables counters collection.\nInterface-level counters\nSince each container has a virtual Ethernet interface, you might want to\ncheck directly the TX and RX counters of this interface. You will notice\nthat each container is associated to a virtual Ethernet interface in\nyour host, with a name like vethKk8Zqi. Figuring\nout which interface corresponds to which container is, unfortunately,\ndifficult.\nBut for now, the best way is to check the metrics from within the\ncontainers. To accomplish this, you can run an executable from the host\nenvironment within the network namespace of a container using ip-netns\nmagic.\nThe ip-netns exec command will let you execute any\nprogram (present in the host system) within any network namespace\nvisible to the current process. This means that your host will be able\nto enter the network namespace of your containers, but your containers\nwon't be able to access the host, nor their sibling containers.\nContainers will be able to \u201csee\u201d and affect their sub-containers,\nthough.\nThe exact format of the command is:\n$ ip netns exec nsname command...\n\nFor example:\n$ ip netns exec mycontainer netstat -i\n\nip netns finds the \"mycontainer\" container by\nusing namespaces pseudo-files. Each process belongs to one network\nnamespace, one PID namespace, one mnt namespace,\netc., and those namespaces are materialized under\n/proc/pid/ns/. For example, the network\nnamespace of PID 42 is materialized by the pseudo-file\n/proc/42/ns/net.\nWhen you run ip netns exec mycontainer ..., it\nexpects /var/run/netns/mycontainer to be one of\nthose pseudo-files. 
(Symlinks are accepted.)\nIn other words, to execute a command within the network namespace of a\ncontainer, we need to:\n\nFind out the PID of any process within the container that we want to investigate;\nCreate a symlink from /var/run/netns/somename to /proc/thepid/ns/net\nExecute ip netns exec somename ....\n\nPlease review Enumerating Cgroups to learn how to find\nthe cgroup of a process running in the container of which you want to\nmeasure network usage. From there, you can examine the pseudo-file named\ntasks, which contains the PIDs that are in the\ncontrol group (i.e., in the container). Pick any one of them.\nPutting everything together, if the \"short ID\" of a container is held in\nthe environment variable $CID, then you can do this:\n$ TASKS=/sys/fs/cgroup/devices/$CID*/tasks\n$ PID=$(head -n 1 $TASKS)\n$ mkdir -p /var/run/netns\n$ ln -sf /proc/$PID/ns/net /var/run/netns/$CID\n$ ip netns exec $CID netstat -i\n\nTips for high-performance metric collection\nNote that running a new process each time you want to update metrics is\n(relatively) expensive. If you want to collect metrics at high\nresolutions, and/or over a large number of containers (think 1000\ncontainers on a single host), you do not want to fork a new process each\ntime.\nHere is how to collect metrics from a single process. You will have to\nwrite your metric collector in C (or any language that lets you do\nlow-level system calls). You need to use a special system call,\nsetns(), which lets the current process enter any\narbitrary namespace. 
It requires, however, an open file descriptor to\nthe namespace pseudo-file (remember: that's the pseudo-file in\n/proc/pid/ns/net).\nHowever, there is a catch: you must not keep this file descriptor open.\nIf you do, when the last process of the control group exits, the\nnamespace will not be destroyed, and its network resources (like the\nvirtual interface of the container) will stay around forever (or until\nyou close that file descriptor).\nThe right approach would be to keep track of the first PID of each\ncontainer, and re-open the namespace pseudo-file each time.\nCollecting metrics when a container exits\nSometimes, you do not care about real-time metric collection, but when a\ncontainer exits, you want to know how much CPU, memory, etc. it has\nused.\nDocker makes this difficult because it relies on lxc-start, which\ncarefully cleans up after itself, but it is still possible. It is\nusually easier to collect metrics at regular intervals (e.g., every\nminute, with the collectd LXC plugin) and rely on that instead.\nBut, if you'd still like to gather the stats when a container stops,\nhere is how:\nFor each container, start a collection process, and move it to the\ncontrol groups that you want to monitor by writing its PID to the tasks\nfile of the cgroup. The collection process should periodically re-read\nthe tasks file to check if it's the last process of the control group.\n(If you also want to collect network statistics as explained in the\nprevious section, you should also move the process to the appropriate\nnetwork namespace.)\nWhen the container exits, lxc-start will try to\ndelete the control groups. It will fail, since the control group is\nstill in use; but that's fine. Your process should now detect that it is\nthe only one remaining in the group. Now is the right time to collect\nall the metrics you need!\nFinally, your process should move itself back to the root control group,\nand remove the container control group. 
To remove a control group, just\nrmdir its directory. It's counter-intuitive to\nrmdir a directory as it still contains files; but\nremember that this is a pseudo-filesystem, so usual rules don't apply.\nAfter the cleanup is done, the collection process can exit safely.",
"title": "Runtime metrics"
},
{
"loc": "/articles/runmetrics#runtime-metrics",
"tags": "",
"text": "Linux Containers rely on control groups \nwhich not only track groups of processes, but also expose metrics about\nCPU, memory, and block I/O usage. You can access those metrics and\nobtain network usage metrics as well. This is relevant for \"pure\" LXC\ncontainers, as well as for Docker containers.",
"title": "Runtime Metrics"
},
{
"loc": "/articles/runmetrics#control-groups",
"tags": "",
"text": "Control groups are exposed through a pseudo-filesystem. In recent\ndistros, you should find this filesystem under /sys/fs/cgroup . Under\nthat directory, you will see multiple sub-directories, called devices,\nfreezer, blkio, etc.; each sub-directory actually corresponds to a different\ncgroup hierarchy. On older systems, the control groups might be mounted on /cgroup , without\ndistinct hierarchies. In that case, instead of seeing the sub-directories,\nyou will see a bunch of files in that directory, and possibly some directories\ncorresponding to existing containers. To figure out where your control groups are mounted, you can run: $ grep cgroup /proc/mounts",
"title": "Control Groups"
},
{
"loc": "/articles/runmetrics#enumerating-cgroups",
"tags": "",
"text": "You can look into /proc/cgroups to see the different control group subsystems\nknown to the system, the hierarchy they belong to, and how many groups they contain. You can also look at /proc/ pid /cgroup to see which control groups a process\nbelongs to. The control group will be shown as a path relative to the root of\nthe hierarchy mountpoint; e.g., / means \u201cthis process has not been assigned into\na particular group\u201d, while /lxc/pumpkin means that the process is likely to be\na member of a container named pumpkin .",
"title": "Enumerating Cgroups"
},
{
"loc": "/articles/runmetrics#finding-the-cgroup-for-a-given-container",
"tags": "",
"text": "For each container, one cgroup will be created in each hierarchy. On\nolder systems with older versions of the LXC userland tools, the name of\nthe cgroup will be the name of the container. With more recent versions\nof the LXC tools, the cgroup will be lxc/ container_name . For Docker containers using cgroups, the container name will be the full\nID or long ID of the container. If a container shows up as ae836c95b4c3\nin docker ps , its long ID might be something like ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79 . You can\nlook it up with docker inspect or docker ps --no-trunc . Putting everything together to look at the memory metrics for a Docker\ncontainer, take a look at /sys/fs/cgroup/memory/lxc/ longid / .",
"title": "Finding the Cgroup for a Given Container"
},
{
"loc": "/articles/runmetrics#metrics-from-cgroups-memory-cpu-block-io",
"tags": "",
"text": "For each subsystem (memory, CPU, and block I/O), you will find one or\nmore pseudo-files containing statistics. Memory Metrics: memory.stat Memory metrics are found in the \"memory\" cgroup. Note that the memory\ncontrol group adds a little overhead, because it does very fine-grained\naccounting of the memory usage on your host. Therefore, many distros\nchose to not enable it by default. Generally, to enable it, all you have\nto do is to add some kernel command-line parameters: cgroup_enable=memory swapaccount=1 . The metrics are in the pseudo-file memory.stat .\nHere is what it will look like: cache 11492564992\nrss 1930993664\nmapped_file 306728960\npgpgin 406632648\npgpgout 403355412\nswap 0\npgfault 728281223\npgmajfault 1724\ninactive_anon 46608384\nactive_anon 1884520448\ninactive_file 7003344896\nactive_file 4489052160\nunevictable 32768\nhierarchical_memory_limit 9223372036854775807\nhierarchical_memsw_limit 9223372036854775807\ntotal_cache 11492564992\ntotal_rss 1930993664\ntotal_mapped_file 306728960\ntotal_pgpgin 406632648\ntotal_pgpgout 403355412\ntotal_swap 0\ntotal_pgfault 728281223\ntotal_pgmajfault 1724\ntotal_inactive_anon 46608384\ntotal_active_anon 1884520448\ntotal_inactive_file 7003344896\ntotal_active_file 4489052160\ntotal_unevictable 32768 The first half (without the total_ prefix) contains statistics relevant\nto the processes within the cgroup, excluding sub-cgroups. The second half\n(with the total_ prefix) includes sub-cgroups as well. Some metrics are \"gauges\", i.e., values that can increase or decrease\n(e.g., swap, the amount of swap space used by the members of the cgroup).\nSome others are \"counters\", i.e., values that can only go up, because\nthey represent occurrences of a specific event (e.g., pgfault, which\nindicates the number of page faults which happened since the creation of\nthe cgroup; this number can never decrease). 
cache: \n the amount of memory used by the processes of this control group\n that can be associated precisely with a block on a block device.\n When you read from and write to files on disk, this amount will\n increase. This will be the case if you use \"conventional\" I/O\n ( open , read ,\n write syscalls) as well as mapped files (with\n mmap ). It also accounts for the memory used by\n tmpfs mounts, though the reasons are unclear. rss: \n the amount of memory that doesn't correspond to anything on disk:\n stacks, heaps, and anonymous memory maps. mapped_file: \n indicates the amount of memory mapped by the processes in the\n control group. It doesn't give you information about how much \n memory is used; it rather tells you how it is used. pgfault and pgmajfault: \n indicate the number of times that a process of the cgroup triggered\n a \"page fault\" and a \"major fault\", respectively. A page fault\n happens when a process accesses a part of its virtual memory space\n which is nonexistent or protected. The former can happen if the\n process is buggy and tries to access an invalid address (it will\n then be sent a SIGSEGV signal, typically\n killing it with the famous Segmentation fault \n message). The latter can happen when the process reads from a memory\n zone which has been swapped out, or which corresponds to a mapped\n file: in that case, the kernel will load the page from disk, and let\n the CPU complete the memory access. It can also happen when the\n process writes to a copy-on-write memory zone: likewise, the kernel\n will preempt the process, duplicate the memory page, and resume the\n write operation on the process' own copy of the page. \"Major\" faults\n happen when the kernel actually has to read the data from disk. When\n it just has to duplicate an existing page, or allocate an empty\n page, it's a regular (or \"minor\") fault. swap: \n the amount of swap currently used by the processes in this cgroup. 
active_anon and inactive_anon: \n the amount of anonymous memory that has been identified as\n respectively active and inactive by the kernel. \"Anonymous\"\n memory is the memory that is not linked to disk pages. In other\n words, that's the equivalent of the rss counter described above. In\n fact, the very definition of the rss counter is active_anon +\n inactive_anon - tmpfs (where tmpfs is the amount of memory\n used up by tmpfs filesystems mounted by this\n control group). Now, what's the difference between \"active\" and\n \"inactive\"? Pages are initially \"active\"; and at regular intervals,\n the kernel sweeps over the memory, and tags some pages as\n \"inactive\". Whenever they are accessed again, they are immediately\n retagged \"active\". When the kernel is almost out of memory, and time\n comes to swap out to disk, the kernel will swap \"inactive\" pages. active_file and inactive_file: \n cache memory, with active and inactive similar to the anon \n memory above. The exact formula is cache = active_file +\n inactive_file + tmpfs . The exact rules used by the kernel\n to move memory pages between active and inactive sets are different\n from the ones used for anonymous memory, but the general principle\n is the same. Note that when the kernel needs to reclaim memory, it\n is cheaper to reclaim a clean (=non modified) page from this pool,\n since it can be reclaimed immediately (while anonymous pages and\n dirty/modified pages have to be written to disk first). unevictable: \n the amount of memory that cannot be reclaimed; generally, it will\n account for memory that has been \"locked\" with mlock .\n It is often used by crypto frameworks to make sure that\n secret keys and other sensitive material never gets swapped out to\n disk. memory and memsw limits: \n These are not really metrics, but a reminder of the limits applied\n to this cgroup. 
The first one indicates the maximum amount of\n physical memory that can be used by the processes of this control\n group; the second one indicates the maximum amount of RAM+swap. Accounting for memory in the page cache is very complex. If two\nprocesses in different control groups both read the same file\n(ultimately relying on the same blocks on disk), the corresponding\nmemory charge will be split between the control groups. It's nice, but\nit also means that when a cgroup is terminated, it could increase the\nmemory usage of another cgroup, because they are not splitting the cost\nanymore for those memory pages. CPU metrics: cpuacct.stat Now that we've covered memory metrics, everything else will look very\nsimple in comparison. CPU metrics will be found in the cpuacct controller. For each container, you will find a pseudo-file cpuacct.stat ,\ncontaining the CPU usage accumulated by the processes of the container,\nbroken down between user and system time. If you're not familiar\nwith the distinction, user is the time during which the processes were\nin direct control of the CPU (i.e., executing process code), and system \nis the time during which the CPU was executing system calls on behalf of\nthose processes. Those times are expressed in ticks of 1/100th of a second. Actually,\nthey are expressed in \"user jiffies\". There are USER_HZ \"jiffies\" per second, and on x86 systems, USER_HZ is 100. This used to map exactly to the\nnumber of scheduler \"ticks\" per second; but with the advent of higher\nfrequency scheduling, as well as tickless kernels , the number of kernel ticks\nwasn't relevant anymore. It stuck around anyway, mainly for legacy and\ncompatibility reasons. Block I/O metrics Block I/O is accounted in the blkio controller.\nDifferent metrics are scattered across different files. 
While you can\nfind in-depth details in the blkio-controller \nfile in the kernel documentation, here is a short list of the most\nrelevant ones: blkio.sectors: \n contains the number of 512-byte sectors read and written by the\n processes that are members of the cgroup, device by device. Reads and writes\n are merged in a single counter. blkio.io_service_bytes: \n indicates the number of bytes read and written by the cgroup. It has\n 4 counters per device, because for each device, it differentiates\n between synchronous vs. asynchronous I/O, and reads vs. writes. blkio.io_serviced: \n the number of I/O operations performed, regardless of their size. It\n also has 4 counters per device. blkio.io_queued: \n indicates the number of I/O operations currently queued for this\n cgroup. In other words, if the cgroup isn't doing any I/O, this will\n be zero. Note that the opposite is not true. In other words, if\n there is no I/O queued, it does not mean that the cgroup is idle\n (I/O-wise). It could be doing purely synchronous reads on an\n otherwise quiescent device, which is therefore able to handle them\n immediately, without queuing. Also, while it is helpful to figure\n out which cgroup is putting stress on the I/O subsystem, keep in\n mind that it is a relative quantity. Even if a process group does\n not perform more I/O, its queue size can increase just because the\n device load increases because of other devices.",
"title": "Metrics from Cgroups: Memory, CPU, Block IO"
},
{
"loc": "/articles/runmetrics#network-metrics",
"tags": "",
"text": "Network metrics are not exposed directly by control groups. There is a\ngood explanation for that: network interfaces exist within the context\nof network namespaces . The kernel could probably accumulate metrics\nabout packets and bytes sent and received by a group of processes, but\nthose metrics wouldn't be very useful. You want per-interface metrics\n(because traffic happening on the local lo \ninterface doesn't really count). But since processes in a single cgroup\ncan belong to multiple network namespaces, those metrics would be harder\nto interpret: multiple network namespaces means multiple lo \ninterfaces, potentially multiple eth0 \ninterfaces, etc.; so this is why there is no easy way to gather network\nmetrics with control groups. Instead, we can gather network metrics from other sources: IPtables IPtables (or rather, the netfilter framework for which iptables is just\nan interface) can do some serious accounting. For instance, you can set up a rule to account for the outbound HTTP\ntraffic on a web server: $ iptables -I OUTPUT -p tcp --sport 80 There is no -j or -g flag,\nso the rule will just count matched packets and go to the following\nrule. Later, you can check the values of the counters, with: $ iptables -nxvL OUTPUT Technically, -n is not required, but it will\nprevent iptables from doing DNS reverse lookups, which are probably\nuseless in this scenario. Counters include packets and bytes. If you want to set up metrics for\ncontainer traffic like this, you could execute a for \nloop to add two iptables rules per\ncontainer IP address (one in each direction), in the FORWARD \nchain. This will only meter traffic going through the NAT\nlayer; you will also have to add traffic going through the userland\nproxy. Then, you will need to check those counters on a regular basis. If you\nhappen to use collectd , there is a nice plugin \nto automate iptables counters collection. 
Interface-level counters Since each container has a virtual Ethernet interface, you might want to\ncheck directly the TX and RX counters of this interface. You will notice\nthat each container is associated to a virtual Ethernet interface in\nyour host, with a name like vethKk8Zqi . Figuring\nout which interface corresponds to which container is, unfortunately,\ndifficult. But for now, the best way is to check the metrics from within the\ncontainers . To accomplish this, you can run an executable from the host\nenvironment within the network namespace of a container using ip-netns\nmagic . The ip-netns exec command will let you execute any\nprogram (present in the host system) within any network namespace\nvisible to the current process. This means that your host will be able\nto enter the network namespace of your containers, but your containers\nwon't be able to access the host, nor their sibling containers.\nContainers will be able to \u201csee\u201d and affect their sub-containers,\nthough. The exact format of the command is: $ ip netns exec nsname command... For example: $ ip netns exec mycontainer netstat -i ip netns finds the \"mycontainer\" container by\nusing namespaces pseudo-files. Each process belongs to one network\nnamespace, one PID namespace, one mnt namespace,\netc., and those namespaces are materialized under /proc/ pid /ns/ . For example, the network\nnamespace of PID 42 is materialized by the pseudo-file /proc/42/ns/net . When you run ip netns exec mycontainer ... , it\nexpects /var/run/netns/mycontainer to be one of\nthose pseudo-files. (Symlinks are accepted.) In other words, to execute a command within the network namespace of a\ncontainer, we need to: Find out the PID of any process within the container that we want to investigate; Create a symlink from /var/run/netns/ somename to /proc/ thepid /ns/net Execute ip netns exec somename .... 
Please review Enumerating Cgroups to learn how to find\nthe cgroup of a process running in the container whose network usage you\nwant to measure. From there, you can examine the pseudo-file named tasks , which contains the PIDs that are in the\ncontrol group (i.e., in the container). Pick any one of them. Putting everything together, if the \"short ID\" of a container is held in\nthe environment variable $CID , then you can do this: $ TASKS=/sys/fs/cgroup/devices/$CID*/tasks\n$ PID=$(head -n 1 $TASKS)\n$ mkdir -p /var/run/netns\n$ ln -sf /proc/$PID/ns/net /var/run/netns/$CID\n$ ip netns exec $CID netstat -i",
|
|
"title": "Network Metrics"
|
|
},
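The per-container accounting rules described above can be sketched as a small shell helper. This sketch only prints the iptables commands (the 172.17.0.x addresses are hypothetical examples); pipe the output to a root shell to actually install the rules.

```shell
# Emit the two counting rules (one per direction) for one container IP.
# With no -j or -g target, each rule only counts matching packets and
# falls through to the next rule in the FORWARD chain.
emit_accounting_rules() {
    ip="$1"
    echo "iptables -I FORWARD -s $ip"
    echo "iptables -I FORWARD -d $ip"
}

# Hypothetical container addresses; in practice you would gather them
# from your containers' network settings.
for ip in 172.17.0.2 172.17.0.3; do
    emit_accounting_rules "$ip"
done
# To install the rules, re-run the loop piped into `sh` as root; then
# read the counters back with: iptables -nxvL FORWARD
```

As with the OUTPUT example, the FORWARD counters can then be sampled on whatever schedule your collector uses.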
|
|
{
|
|
"loc": "/articles/runmetrics#tips-for-high-performance-metric-collection",
|
|
"tags": "",
|
|
"text": "Note that running a new process each time you want to update metrics is\n(relatively) expensive. If you want to collect metrics at high\nresolutions, and/or over a large number of containers (think 1000\ncontainers on a single host), you do not want to fork a new process each\ntime. Here is how to collect metrics from a single process. You will have to\nwrite your metric collector in C (or any language that lets you do\nlow-level system calls). You need to use a special system call, setns() , which lets the current process enter any\narbitrary namespace. It requires, however, an open file descriptor to\nthe namespace pseudo-file (remember: that's the pseudo-file in /proc/ pid /ns/net ). However, there is a catch: you must not keep this file descriptor open.\nIf you do, when the last process of the control group exits, the\nnamespace will not be destroyed, and its network resources (like the\nvirtual interface of the container) will stay around for ever (or until\nyou close that file descriptor). The right approach would be to keep track of the first PID of each\ncontainer, and re-open the namespace pseudo-file each time.",
|
|
"title": "Tips for high-performance metric collection"
|
|
},
|
|
{
|
|
"loc": "/articles/runmetrics#collecting-metrics-when-a-container-exits",
|
|
"tags": "",
|
|
"text": "Sometimes, you do not care about real time metric collection, but when a\ncontainer exits, you want to know how much CPU, memory, etc. it has\nused. Docker makes this difficult because it relies on lxc-start , which\ncarefully cleans up after itself, but it is still possible. It is\nusually easier to collect metrics at regular intervals (e.g., every\nminute, with the collectd LXC plugin) and rely on that instead. But, if you'd still like to gather the stats when a container stops,\nhere is how: For each container, start a collection process, and move it to the\ncontrol groups that you want to monitor by writing its PID to the tasks\nfile of the cgroup. The collection process should periodically re-read\nthe tasks file to check if it's the last process of the control group.\n(If you also want to collect network statistics as explained in the\nprevious section, you should also move the process to the appropriate\nnetwork namespace.) When the container exits, lxc-start will try to\ndelete the control groups. It will fail, since the control group is\nstill in use; but that's fine. You process should now detect that it is\nthe only one remaining in the group. Now is the right time to collect\nall the metrics you need! Finally, your process should move itself back to the root control group,\nand remove the container control group. To remove a control group, just rmdir its directory. It's counter-intuitive to rmdir a directory as it still contains files; but\nremember that this is a pseudo-filesystem, so usual rules don't apply.\nAfter the cleanup is done, the collection process can exit safely.",
|
|
"title": "Collecting metrics when a container exits"
|
|
},
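The "last process in the control group" check above can be sketched in shell. The tasks-file path is a parameter here so the logic can be tried against an ordinary file; in a real collector it would point into the cgroup hierarchy (e.g. a `tasks` file under /sys/fs/cgroup).

```shell
# Return success when the tasks file lists exactly one PID and it is ours,
# i.e. the collection process is the only member left in the control group.
is_last_in_cgroup() {
    tasks_file="$1"
    mypid="$2"
    [ "$(wc -l < "$tasks_file")" -eq 1 ] && [ "$(cat "$tasks_file")" = "$mypid" ]
}

# Demonstration against a stand-in tasks file:
tmp=$(mktemp)
printf '%s\n' 42 99 > "$tmp"     # two members: keep polling
is_last_in_cgroup "$tmp" 42 || echo "still shared"
printf '%s\n' 42 > "$tmp"        # only us left: collect, then clean up
is_last_in_cgroup "$tmp" 42 && echo "time to collect metrics"
rm -f "$tmp"
```

In the real workflow, a success from this check is the point where you gather the metrics, move back to the root control group, and rmdir the container's cgroup directory.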
|
|
{
|
|
"loc": "/articles/b2d_volume_resize/",
|
|
"tags": "",
|
|
"text": "Getting \u201cno space left on device\u201d errors with Boot2Docker?\nIf you're using Boot2Docker with a large number of images, or the images you're\nworking with are very large, your pulls might start failing with \"no space left \non device\" errors when the Boot2Docker volume fills up. The solution is to \nincrease the volume size by first cloning it, then resizing it using a disk \npartitioning tool. \nWe recommend GParted.\nThe tool comes as a bootable ISO, is a free download, and works well with \nVirtualBox.\n1. Stop Boot2Docker\nIssue the command to stop the Boot2Docker VM on the command line:\n$ boot2docker stop\n\n2. Clone the VMDK image to a VDI image\nBoot2Docker ships with a VMDK image, which can\u2019t be resized by VirtualBox\u2019s \nnative tools. We will instead create a VDI volume and clone the VMDK volume to \nit. \nUsing the command line VirtualBox tools, clone the VMDK image to a VDI image:\n$ vboxmanage clonehd /full/path/to/boot2docker-hd.vmdk /full/path/to/newVDIimage.vdi --format VDI --variant Standard\n\n3. Resize the VDI volume\nChoose a size that will be appropriate for your needs. If you\u2019re spinning up a \nlot of containers, or your containers are particularly large, larger will be \nbetter:\n$ vboxmanage modifyhd /full/path/to/newVDIimage.vdi --resize size in MB\n\n4. Download a disk partitioning tool ISO\nTo resize the volume, we'll use GParted. \nOnce you've downloaded the tool, add the ISO to the Boot2Docker VM IDE bus. \nYou might need to create the bus before you can add the ISO. \n\nNote: \nIt's important that you choose a partitioning tool that is available as an ISO so \nthat the Boot2Docker VM can be booted with it.\n\n\n \n \n \n \n \n \n\n\n5. Add the new VDI image\nIn the settings for the Boot2Docker image in VirtualBox, remove the VMDK image \nfrom the SATA contoller and add the VDI image.\n\n6. 
Verify the boot order\nIn the System settings for the Boot2Docker VM, make sure that CD/DVD is \nat the top of the Boot Order list.\n\n7. Boot to the disk partitioning ISO\nManually start the Boot2Docker VM in VirtualBox, and the disk partitioning ISO \nshould start up. Using GParted, choose the GParted Live (default settings) \noption. Choose the default keyboard, language, and XWindows settings, and the \nGParted tool will start up and display the VDI volume you created. Right click \non the VDI and choose Resize/Move. \n\nDrag the slider representing the volume to the maximum available size, click \nResize/Move, and then Apply. \n\nQuit GParted and shut down the VM. Remove the GParted ISO from the IDE controller \nfor the Boot2Docker VM in VirtualBox.\n8. Start the Boot2Docker VM\nFire up the Boot2Docker VM manually in VirtualBox. The VM should log in \nautomatically, but if it doesn't, the credentials are docker/tcuser. Using \nthe df -h command, verify that your changes took effect.\n\nYou\u2019re done!",
|
|
"title": "Increasing a Boot2Docker volume"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#getting-no-space-left-on-device-errors-with-boot2docker",
|
|
"tags": "",
|
|
"text": "If you're using Boot2Docker with a large number of images, or the images you're\nworking with are very large, your pulls might start failing with \"no space left \non device\" errors when the Boot2Docker volume fills up. The solution is to \nincrease the volume size by first cloning it, then resizing it using a disk \npartitioning tool. We recommend GParted .\nThe tool comes as a bootable ISO, is a free download, and works well with \nVirtualBox.",
|
|
"title": "Getting \u201cno space left on device\u201d errors with Boot2Docker?"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#1-stop-boot2docker",
|
|
"tags": "",
|
|
"text": "Issue the command to stop the Boot2Docker VM on the command line: $ boot2docker stop",
|
|
"title": "1. Stop Boot2Docker"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#2-clone-the-vmdk-image-to-a-vdi-image",
|
|
"tags": "",
|
|
"text": "Boot2Docker ships with a VMDK image, which can\u2019t be resized by VirtualBox\u2019s \nnative tools. We will instead create a VDI volume and clone the VMDK volume to \nit. Using the command line VirtualBox tools, clone the VMDK image to a VDI image: $ vboxmanage clonehd /full/path/to/boot2docker-hd.vmdk /full/path/to/ newVDIimage .vdi --format VDI --variant Standard",
|
|
"title": "2. Clone the VMDK image to a VDI image"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#3-resize-the-vdi-volume",
|
|
"tags": "",
|
|
"text": "Choose a size that will be appropriate for your needs. If you\u2019re spinning up a \nlot of containers, or your containers are particularly large, larger will be \nbetter: $ vboxmanage modifyhd /full/path/to/ newVDIimage .vdi --resize size in MB",
|
|
"title": "3. Resize the VDI volume"
|
|
},
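Since --resize takes a value in megabytes, it is easy to mis-size the disk when thinking in gigabytes. A tiny helper sketch (the 40 GB figure is just an illustrative choice, not a recommendation):

```shell
# Convert a size in GB to the MB value that `vboxmanage modifyhd --resize`
# expects (1 GB = 1024 MB, matching VirtualBox's binary units).
gb_to_mb() {
    echo $(( $1 * 1024 ))
}

SIZE_MB=$(gb_to_mb 40)   # hypothetical target size of 40 GB
echo "vboxmanage modifyhd /full/path/to/newVDIimage.vdi --resize $SIZE_MB"
```

The echoed command is what you would actually run against your cloned VDI image.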
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#4-download-a-disk-partitioning-tool-iso",
|
|
"tags": "",
|
|
"text": "To resize the volume, we'll use GParted . \nOnce you've downloaded the tool, add the ISO to the Boot2Docker VM IDE bus. \nYou might need to create the bus before you can add the ISO. Note: \nIt's important that you choose a partitioning tool that is available as an ISO so \nthat the Boot2Docker VM can be booted with it.",
|
|
"title": "4. Download a disk partitioning tool ISO"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#5-add-the-new-vdi-image",
|
|
"tags": "",
|
|
"text": "In the settings for the Boot2Docker image in VirtualBox, remove the VMDK image \nfrom the SATA contoller and add the VDI image.",
|
|
"title": "5. Add the new VDI image"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#6-verify-the-boot-order",
|
|
"tags": "",
|
|
"text": "In the System settings for the Boot2Docker VM, make sure that CD/DVD is \nat the top of the Boot Order list.",
|
|
"title": "6. Verify the boot order"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#7-boot-to-the-disk-partitioning-iso",
|
|
"tags": "",
|
|
"text": "Manually start the Boot2Docker VM in VirtualBox, and the disk partitioning ISO \nshould start up. Using GParted, choose the GParted Live (default settings) \noption. Choose the default keyboard, language, and XWindows settings, and the \nGParted tool will start up and display the VDI volume you created. Right click \non the VDI and choose Resize/Move . Drag the slider representing the volume to the maximum available size, click Resize/Move , and then Apply . Quit GParted and shut down the VM. Remove the GParted ISO from the IDE controller \nfor the Boot2Docker VM in VirtualBox.",
|
|
"title": "7. Boot to the disk partitioning ISO"
|
|
},
|
|
{
|
|
"loc": "/articles/b2d_volume_resize#8-start-the-boot2docker-vm",
|
|
"tags": "",
|
|
"text": "Fire up the Boot2Docker VM manually in VirtualBox. The VM should log in \nautomatically, but if it doesn't, the credentials are docker/tcuser . Using \nthe df -h command, verify that your changes took effect. You\u2019re done!",
|
|
"title": "8. Start the Boot2Docker VM"
|
|
},
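The df -h verification can be scripted if you want a quick check from inside the VM. /var/lib/docker is the default data root; falling back to / covers setups where it is not a separate mount (both path choices are assumptions about your layout).

```shell
# Report the filesystem backing Docker's data directory so you can confirm
# the extra space is visible. Falls back to the root filesystem if the
# default path does not exist on this machine.
df -h /var/lib/docker 2>/dev/null || df -h /
```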
|
|
{
|
|
"loc": "/articles/systemd/",
|
|
"tags": "",
|
|
"text": "Controlling and configuring Docker using Systemd\nMany Linux distributions use systemd to start the Docker daemon. This document\nshows a few examples of how to customise Docker's settings.\nStarting the Docker daemon\nOnce Docker is installed, you will need to start the Docker daemon.\n$ sudo systemctl start docker\n# or on older distributions, you may need to use\n$ sudo service docker start\n\nIf you want Docker to start at boot, you should also:\n$ sudo systemctl enable docker\n# or on older distributions, you may need to use\n$ sudo chkconfig docker on\n\nCustom Docker daemon options\nThere are a number of ways to configure the daemon flags and environment variables\nfor your Docker daemon. \nIf the docker.service file is set to use an EnvironmentFile\n(often pointing to /etc/sysconfig/docker) then you can modify the\nreferenced file.\nOr, you may need to edit the docker.service file, which can be in /usr/lib/systemd/system\nor /etc/systemd/service.\nRuntime directory and storage driver\nYou may want to control the disk space used for Docker images, containers\nand volumes by moving it to a separate partition.\nIn this example, we'll assume that your docker.service file looks something like:\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=http://docs.docker.com\nAfter=network.target docker.socket\nRequires=docker.socket\n\n[Service]\nType=notify\nEnvironmentFile=-/etc/sysconfig/docker\nExecStart=/usr/bin/docker -d -H fd:// $OPTIONS\nLimitNOFILE=1048576\nLimitNPROC=1048576\n\n[Install]\nAlso=docker.socket\n\nThis will allow us to add extra flags to the /etc/sysconfig/docker file by\nsetting OPTIONS:\nOPTIONS=\"--graph /mnt/docker-data --storage-driver btrfs\"\n\nYou can also set other environment variables in this file, for example, the\nHTTP_PROXY environment variables described below.\nHTTP Proxy\nThis example overrides the default docker.service file.\nIf you are behind a HTTP proxy server, for example in corporate 
settings,\nyou will need to add this configuration in the Docker systemd service file.\nFirst, create a systemd drop-in directory for the docker service:\nmkdir /etc/systemd/system/docker.service.d\n\nNow create a file called /etc/systemd/system/docker.service.d/http-proxy.conf\nthat adds the HTTP_PROXY environment variable:\n[Service]\nEnvironment=\"HTTP_PROXY=http://proxy.example.com:80/\"\n\nIf you have internal Docker registries that you need to contact without\nproxying you can specify them via the NO_PROXY environment variable:\nEnvironment=\"HTTP_PROXY=http://proxy.example.com:80/\" \"NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com\"\n\nFlush changes:\n$ sudo systemctl daemon-reload\n\nRestart Docker:\n$ sudo systemctl restart docker\n\nManually creating the systemd unit files\nWhen installing the binary without a package, you may want\nto integrate Docker with systemd. For this, simply install the two unit files\n(service and socket) from the github\nrepository\nto /etc/systemd/system.",
|
|
"title": "Controlling and configuring Docker using Systemd"
|
|
},
|
|
{
|
|
"loc": "/articles/systemd#controlling-and-configuring-docker-using-systemd",
|
|
"tags": "",
|
|
"text": "Many Linux distributions use systemd to start the Docker daemon. This document\nshows a few examples of how to customise Docker's settings.",
|
|
"title": "Controlling and configuring Docker using Systemd"
|
|
},
|
|
{
|
|
"loc": "/articles/systemd#starting-the-docker-daemon",
|
|
"tags": "",
|
|
"text": "Once Docker is installed, you will need to start the Docker daemon. $ sudo systemctl start docker\n# or on older distributions, you may need to use\n$ sudo service docker start If you want Docker to start at boot, you should also: $ sudo systemctl enable docker\n# or on older distributions, you may need to use\n$ sudo chkconfig docker on",
|
|
"title": "Starting the Docker daemon"
|
|
},
|
|
{
|
|
"loc": "/articles/systemd#custom-docker-daemon-options",
|
|
"tags": "",
|
|
"text": "There are a number of ways to configure the daemon flags and environment variables\nfor your Docker daemon. If the docker.service file is set to use an EnvironmentFile \n(often pointing to /etc/sysconfig/docker ) then you can modify the\nreferenced file. Or, you may need to edit the docker.service file, which can be in /usr/lib/systemd/system \nor /etc/systemd/service . Runtime directory and storage driver You may want to control the disk space used for Docker images, containers\nand volumes by moving it to a separate partition. In this example, we'll assume that your docker.service file looks something like: [Unit]\nDescription=Docker Application Container Engine\nDocumentation=http://docs.docker.com\nAfter=network.target docker.socket\nRequires=docker.socket\n\n[Service]\nType=notify\nEnvironmentFile=-/etc/sysconfig/docker\nExecStart=/usr/bin/docker -d -H fd:// $OPTIONS\nLimitNOFILE=1048576\nLimitNPROC=1048576\n\n[Install]\nAlso=docker.socket This will allow us to add extra flags to the /etc/sysconfig/docker file by\nsetting OPTIONS : OPTIONS=\"--graph /mnt/docker-data --storage-driver btrfs\" You can also set other environment variables in this file, for example, the HTTP_PROXY environment variables described below. HTTP Proxy This example overrides the default docker.service file. If you are behind a HTTP proxy server, for example in corporate settings,\nyou will need to add this configuration in the Docker systemd service file. 
First, create a systemd drop-in directory for the docker service: mkdir /etc/systemd/system/docker.service.d Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf \nthat adds the HTTP_PROXY environment variable: [Service]\nEnvironment=\"HTTP_PROXY=http://proxy.example.com:80/\" If you have internal Docker registries that you need to contact without\nproxying you can specify them via the NO_PROXY environment variable: Environment=\"HTTP_PROXY=http://proxy.example.com:80/\" \"NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com\" Flush changes: $ sudo systemctl daemon-reload Restart Docker: $ sudo systemctl restart docker",
|
|
"title": "Custom Docker daemon options"
|
|
},
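The drop-in steps above can be dry-run without touching the real system; in this sketch the file is written under a temporary directory instead of /etc/systemd/system/docker.service.d, and proxy.example.com is the placeholder host from the text.

```shell
# Write the http-proxy.conf drop-in to a scratch directory; substitute
# /etc/systemd/system/docker.service.d (as root) to apply it for real.
DROPIN_DIR="$(mktemp -d)/docker.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/http-proxy.conf" <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
EOF
cat "$DROPIN_DIR/http-proxy.conf"
# Once the file is in place under /etc/systemd/system, reload and restart:
#   systemctl daemon-reload && systemctl restart docker
```

The single-quoted heredoc delimiter ('EOF') keeps the $-free proxy line from being subject to shell expansion, so the file lands byte-for-byte as shown.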
|
|
{
|
|
"loc": "/articles/systemd#manually-creating-the-systemd-unit-files",
|
|
"tags": "",
|
|
"text": "When installing the binary without a package, you may want\nto integrate Docker with systemd. For this, simply install the two unit files\n(service and socket) from the github\nrepository \nto /etc/systemd/system .",
|
|
"title": "Manually creating the systemd unit files"
|
|
},
|
|
{
|
|
"loc": "/reference/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/reference#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/reference#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
|
|
{
|
|
"loc": "/reference#installation",
|
|
"tags": "",
|
|
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/reference#user-guide",
|
|
"tags": "",
|
|
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
|
|
"title": "User Guide"
|
|
},
|
|
{
|
|
"loc": "/reference#docker-hub",
|
|
"tags": "",
|
|
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
|
|
"title": "Docker Hub"
|
|
},
|
|
{
|
|
"loc": "/reference#examples",
|
|
"tags": "",
|
|
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/reference#articles",
|
|
"tags": "",
|
|
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
|
|
"title": "Articles"
|
|
},
|
|
{
|
|
"loc": "/reference#reference",
|
|
"tags": "",
|
|
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
|
|
"title": "Reference"
|
|
},
|
|
{
|
|
"loc": "/reference#contributor-guide",
|
|
"tags": "",
|
|
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
|
|
"title": "Contributor Guide"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#installation",
|
|
"tags": "",
|
|
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#user-guide",
|
|
"tags": "",
|
|
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
|
|
"title": "User Guide"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#docker-hub",
|
|
"tags": "",
|
|
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
|
|
"title": "Docker Hub"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#examples",
|
|
"tags": "",
|
|
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#articles",
|
|
"tags": "",
|
|
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
|
|
"title": "Articles"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#reference",
|
|
"tags": "",
|
|
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
|
|
"title": "Reference"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline#contributor-guide",
|
|
"tags": "",
|
|
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
|
|
"title": "Contributor Guide"
|
|
},
|
|
{
|
|
"loc": "/reference/commandline/cli/",
|
|
"tags": "",
|
|
"text": "Command Line\n\nNote: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.\n\nTo list available commands, either run docker with no parameters\nor execute docker help:\n$ sudo docker\n Usage: docker [OPTIONS] COMMAND [arg...]\n -H, --host=[]: The socket(s) to bind to in daemon mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.\n\n A self-sufficient runtime for Linux containers.\n\n ...\n\nHelp\nTo list the help on any command just execute the command, followed by the --help option.\n$ sudo docker run --help\n\nUsage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nRun a command in a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n -c, --cpu-shares=0 CPU shares (relative weight)\n...\n\nOption types\nSingle character command line options can be combined, so rather than\ntyping docker run -i -t --name test busybox sh,\nyou can write docker run -it --name test busybox sh.\nBoolean\nBoolean options take the form -d=false. The value you see in the help text is the\ndefault value which is set if you do not specify that flag. 
If you specify\na Boolean flag without a value, this will set the flag to true, irrespective\nof the default value.\nFor example, running docker run -d will set the value to true, so\nyour container will run in \"detached\" mode, in the background.\nOptions which default to true (e.g., docker build --rm=true) can only\nbe set to the non-default value by explicitly setting them to false:\n$ docker build --rm=false .\n\nMulti\nOptions like -a=[] indicate they can be specified multiple times:\n$ sudo docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash\n\nSometimes this can use a more complex value string, as for -v:\n$ sudo docker run -v /host:/container example/mysql\n\nStrings and Integers\nOptions like --name=\"\" expect a string, and they\ncan only be specified once. Options like -c=0\nexpect an integer, and they can only be specified once.\ndaemon\nUsage: docker [OPTIONS] COMMAND [arg...]\n\nA self-sufficient runtime for linux containers.\n\nOptions:\n --api-enable-cors=false Enable CORS headers in the remote API\n -b, --bridge=\"\" Attach containers to a pre-existing network bridge\n use 'none' to disable container networking\n --bip=\"\" Use this CIDR notation address for the network bridge's IP, not compatible with -b\n -D, --debug=false Enable debug mode\n -d, --daemon=false Enable daemon mode\n --dns=[] Force Docker to use specific DNS servers\n --dns-search=[] Force Docker to use specific DNS search domains\n -e, --exec-driver=\"native\" Force the Docker runtime to use a specific exec driver\n --fixed-cidr=\"\" IPv4 subnet for fixed IPs (e.g.: 10.20.0.0/16)\n this subnet must be nested in the bridge subnet (which is defined by -b or --bip)\n --fixed-cidr-v6=\"\" IPv6 subnet for global IPs (e.g.: 2a00:1450::/64)\n -G, --group=\"docker\" Group to assign the unix socket specified by -H when running in daemon mode\n use '' (the empty string) to disable setting of a group\n -g, --graph=\"/var/lib/docker\" Path to use as the root of the Docker 
runtime\n -H, --host=[] The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.\n --icc=true Allow unrestricted inter-container and Docker daemon host communication\n --insecure-registry=[] Enable insecure communication with specified registries (disables certificate verification for HTTPS and enables HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16)\n --ip=0.0.0.0 Default IP address to use when binding container ports\n --ip-forward=true Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement.\n --ip-masq=true Enable IP masquerading for bridge's IP range\n --iptables=true Enable Docker's addition of iptables rules\n --ipv6=false Enable Docker IPv6 support\n -l, --log-level=\"info\" Set the logging level (debug, info, warn, error, fatal)\n --label=[] Set key=value labels to the daemon (displayed in `docker info`)\n --mtu=0 Set the containers network MTU\n if no value is provided: default to the default route MTU or 1500 if no default route is available\n -p, --pidfile=\"/var/run/docker.pid\" Path to use for daemon PID file\n --registry-mirror=[] Specify a preferred Docker registry mirror\n -s, --storage-driver=\"\" Force the Docker runtime to use a specific storage driver\n --selinux-enabled=false Enable selinux support. 
SELinux does not presently support the BTRFS storage driver\n --storage-opt=[] Set storage driver options\n --tls=false Use TLS; implied by --tlsverify flag\n --tlscacert=\"/home/sven/.docker/ca.pem\" Trust only remotes providing a certificate signed by the CA given here\n --tlscert=\"/home/sven/.docker/cert.pem\" Path to TLS certificate file\n --tlskey=\"/home/sven/.docker/key.pem\" Path to TLS key file\n --tlsverify=false Use TLS and verify the remote (daemon: verify client, client: verify daemon)\n -v, --version=false Print version information and quit\n\nOptions with [] may be specified multiple times.\nThe Docker daemon is the persistent process that manages containers.\nDocker uses the same binary for both the daemon and client. To run the\ndaemon you provide the -d flag.\nTo run the daemon with debug output, use docker -d -D.\nDaemon socket option\nThe Docker daemon can listen for Docker Remote API\nrequests via three different types of Socket: unix, tcp, and fd.\nBy default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock,\nrequiring either root permission, or docker group membership.\nIf you need to access the Docker daemon remotely, you need to enable the tcp\nSocket. Beware that the default setup provides un-encrypted and un-authenticated\ndirect access to the Docker daemon - and should be secured either using the\nbuilt in HTTPS encrypted socket, or by putting a secure web\nproxy in front of it. You can listen on port 2375 on all network interfaces\nwith -H tcp://0.0.0.0:2375, or on a particular network interface using its IP\naddress: -H tcp://192.168.59.103:2375. It is conventional to use port 2375\nfor un-encrypted, and port 2376 for encrypted communication with the daemon.\n\nNote If you're using an HTTPS encrypted socket, keep in mind that only TLS1.0\nand greater are supported. 
Protocols SSLv3 and under are not supported anymore\nfor security reasons.\n\nOn Systemd-based systems, you can communicate with the daemon via\nSystemd socket activation: use\ndocker -d -H fd://. Using fd:// will work perfectly for most setups but\nyou can also specify individual sockets: docker -d -H fd://3. If the\nspecified socket-activated files aren't found, then Docker will exit. You\ncan find examples of using Systemd socket activation with Docker and\nSystemd in the Docker source tree.\nYou can configure the Docker daemon to listen to multiple sockets at the same\ntime using multiple -H options:\n# listen using the default unix socket, and on 2 specific IP addresses on this host.\ndocker -d -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2\n\nThe Docker client will honor the DOCKER_HOST environment variable to set\nthe -H flag for the client.\n$ sudo docker -H tcp://0.0.0.0:2375 ps\n# or\n$ export DOCKER_HOST=\"tcp://0.0.0.0:2375\"\n$ sudo docker ps\n# both are equal\n\nSetting the DOCKER_TLS_VERIFY environment variable to any value other than the empty\nstring is equivalent to setting the --tlsverify flag. The following are equivalent:\n$ sudo docker --tlsverify ps\n# or\n$ export DOCKER_TLS_VERIFY=1\n$ sudo docker ps\n\nThe Docker client will honor the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY\nenvironment variables (or the lowercase versions thereof). HTTPS_PROXY takes\nprecedence over HTTP_PROXY. 
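Putting the client-side variables above together in one shell session (a sketch; the daemon endpoint and proxy address below are placeholders, not values from a real deployment):\n\n```shell\n# Point the client at a hypothetical remote daemon -- placeholder values only.\nexport DOCKER_HOST="tcp://192.168.59.103:2376"\nexport DOCKER_TLS_VERIFY=1                        # any non-empty value implies --tlsverify\nexport HTTPS_PROXY="http://proxy.example.com:3128"\n\n# A later `docker ps` would now behave like:\n#   docker --tlsverify -H tcp://192.168.59.103:2376 ps\necho "host=$DOCKER_HOST tlsverify=$DOCKER_TLS_VERIFY"\n```\n\nUnset these variables to return the client to the default unix socket.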
If you happen to have a proxy configured with the\nHTTP_PROXY or HTTPS_PROXY environment variables but still want to\ncommunicate with the Docker daemon over its default unix domain socket,\nsetting the NO_PROXY environment variable to the path of the socket\n(/var/run/docker.sock) is required.\nDaemon storage-driver option\nThe Docker daemon has support for several different image layer storage drivers: aufs,\ndevicemapper, btrfs and overlay.\nThe aufs driver is the oldest, but is based on a Linux kernel patch-set that\nis unlikely to be merged into the main kernel. It is also known to cause some\nserious kernel crashes. However, aufs is also the only storage driver that allows\ncontainers to share executable and shared library memory, so it is a useful choice\nwhen running thousands of containers with the same program or libraries.\nThe devicemapper driver uses thin provisioning and Copy on Write (CoW)\nsnapshots. For each devicemapper graph location \u2013 typically\n/var/lib/docker/devicemapper \u2013 a thin pool is created based on two block\ndevices, one for data and one for metadata. By default, these block devices\nare created automatically by using loopback mounts of automatically created\nsparse files. Refer to Storage driver options below\nfor how to customize this setup.\nThe article Resizing Docker containers with the Device Mapper plugin, by jpetazzo,\nexplains how to tune your existing setup without the use of options.\nThe btrfs driver is very fast for docker build - but like devicemapper does not\nshare executable memory between devices. Use docker -d -s btrfs -g /mnt/btrfs_partition.\nThe overlay driver is a very fast union filesystem. It is now merged in the main\nLinux kernel as of 3.18.0.\nCall docker -d -s overlay to use it. 
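To confirm which storage driver a daemon actually selected, the docker info report includes a Storage Driver: line. A minimal extraction sketch, run here against captured sample output rather than a live daemon:\n\n```shell\n# Sample docker-info-style output; illustrative text, not from a live daemon.\ninfo_sample='Containers: 4\nImages: 32\nStorage Driver: overlay\nExecution Driver: native-0.2\nKernel Version: 3.18.0'\n\n# Extract the active storage driver from the report.\ndriver=$(printf '%s\n' "$info_sample" | sed -n 's/^Storage Driver: //p')\necho "storage driver: $driver"   # prints: storage driver: overlay\n```\n\nAgainst a real daemon you would pipe the output of sudo docker info through the same filter.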
\n\nNote: \nIt is currently unsupported on btrfs or any Copy on Write filesystem\nand should only be used over ext4 partitions.\n\nStorage driver options\nParticular storage-driver can be configured with options specified with\n--storage-opt flags. The only driver accepting options is devicemapper as\nof now. All its options are prefixed with dm.\nCurrently supported options are:\n\n\ndm.basesize\nSpecifies the size to use when creating the base device, which limits the\nsize of images and containers. The default value is 10G. Note, thin devices\nare inherently \"sparse\", so a 10G device which is mostly empty doesn't use\n10 GB of space on the pool. However, the filesystem will use more space for\nthe empty case the larger the device is.\nWarning: This value affects the system-wide \"base\" empty filesystem\n that may already be initialized and inherited by pulled images. Typically,\n a change to this value will require additional steps to take effect:\n$ sudo service docker stop\n$ sudo rm -rf /var/lib/docker\n$ sudo service docker start\n\nExample use:\n$ sudo docker -d --storage-opt dm.basesize=20G\n\n\n\ndm.loopdatasize\nSpecifies the size to use when creating the loopback file for the \"data\"\ndevice which is used for the thin pool. The default size is 100G. Note that\nthe file is sparse, so it will not initially take up this much space.\nExample use:\n$ sudo docker -d --storage-opt dm.loopdatasize=200G\n\n\n\ndm.loopmetadatasize\nSpecifies the size to use when creating the loopback file for the\n\"metadata\" device which is used for the thin pool. The default size is 2G.\nNote that the file is sparse, so it will not initially take up this much\nspace.\nExample use:\n$ sudo docker -d --storage-opt dm.loopmetadatasize=4G\n\n\n\ndm.fs\nSpecifies the filesystem type to use for the base device. The supported\noptions are \"ext4\" and \"xfs\". 
The default is \"ext4\".\nExample use:\n$ sudo docker -d --storage-opt dm.fs=xfs\n\n\n\ndm.mkfsarg\nSpecifies extra mkfs arguments to be used when creating the base device.\nExample use:\n$ sudo docker -d --storage-opt \"dm.mkfsarg=-O ^has_journal\"\n\n\n\ndm.mountopt\nSpecifies extra mount options used when mounting the thin devices.\nExample use:\n$ sudo docker -d --storage-opt dm.mountopt=nodiscard\n\n\n\ndm.datadev\nSpecifies a custom blockdevice to use for data for the thin pool.\nIf using a block device for device mapper storage, ideally both datadev and\nmetadatadev should be specified to completely avoid using the loopback\ndevice.\nExample use:\n$ sudo docker -d \\\n --storage-opt dm.datadev=/dev/sdb1 \\\n --storage-opt dm.metadatadev=/dev/sdc1\n\n\n\ndm.metadatadev\nSpecifies a custom blockdevice to use for metadata for the thin pool.\nFor best performance the metadata should be on a different spindle than the\ndata, or even better on an SSD.\nIf setting up a new metadata pool it is required to be valid. This can be\nachieved by zeroing the first 4k to indicate empty metadata, like this:\n$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1\n\nExample use:\n$ sudo docker -d \\\n --storage-opt dm.datadev=/dev/sdb1 \\\n --storage-opt dm.metadatadev=/dev/sdc1\n\n\n\ndm.blocksize\nSpecifies a custom blocksize to use for the thin pool. The default\nblocksize is 64K.\nExample use:\n$ sudo docker -d --storage-opt dm.blocksize=512K\n\n\n\ndm.blkdiscard\nEnables or disables the use of blkdiscard when removing devicemapper\ndevices. 
This is enabled by default (only) if using loopback devices and is\nrequired to resparsify the loopback file on image/container removal.\nDisabling this on loopback can lead to much faster container removal\ntimes, but will make the space used in /var/lib/docker directory not be\nreturned to the system for other use when containers are removed.\nExample use:\n$ sudo docker -d --storage-opt dm.blkdiscard=false\n\n\n\nDocker exec-driver option\nThe Docker daemon uses a specifically built libcontainer execution driver as its\ninterface to the Linux kernel namespaces, cgroups, and SELinux.\nThere is still legacy support for the original LXC userspace tools via the lxc execution driver, however, this is\nnot where the primary development of new functionality is taking place.\nAdd -e lxc to the daemon flags to use the lxc execution driver.\nDaemon DNS options\nTo set the DNS server for all Docker containers, use\ndocker -d --dns 8.8.8.8.\nTo set the DNS search domain for all Docker containers, use\ndocker -d --dns-search example.com.\nInsecure registries\nDocker considers a private registry either secure or insecure.\nIn the rest of this section, registry is used for private registry, and myregistry:5000\nis a placeholder example for a private registry.\nA secure registry uses TLS and a copy of its CA certificate is placed on the Docker host at\n/etc/docker/certs.d/myregistry:5000/ca.crt.\nAn insecure registry is either not using TLS (i.e., listening on plain text HTTP), or is using\nTLS with a CA certificate not known by the Docker daemon. 
The latter can happen when the\ncertificate was not found under /etc/docker/certs.d/myregistry:5000/, or if the certificate\nverification failed (i.e., wrong CA).\nBy default, Docker assumes all registries to be secure, except for local registries (see below).\nCommunicating with an insecure registry is not possible if Docker assumes that registry is secure.\nIn order to communicate with an insecure registry, the Docker daemon requires --insecure-registry\nin one of the following two forms: \n\n--insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should be considered insecure.\n--insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain resolves to an IP address\nin the subnet described by the CIDR notation should be considered insecure.\n\nThe flag can be used multiple times to allow multiple registries to be marked as insecure.\nIf an insecure registry is not marked as insecure, docker pull, docker push, and docker search\nwill result in an error message prompting the user to either secure the registry or pass the --insecure-registry\nflag to the Docker daemon as described above.\nLocal registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure\nas of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.\nRunning a Docker daemon behind an HTTPS_PROXY\nWhen running inside a LAN that uses an HTTPS proxy, the Docker Hub certificates\nwill be replaced by the proxy's certificates. 
These certificates need to be added\nto your Docker host's configuration:\n\nInstall the ca-certificates package for your distribution\nAsk your network admin for the proxy's CA certificate and append it to\n /etc/pki/tls/certs/ca-bundle.crt\nThen start your Docker daemon with HTTPS_PROXY=http://username:password@proxy:port/ docker -d.\n The username: and password@ are optional - and are only needed if your proxy\n is set up to require authentication.\n\nThis will only add the proxy and authentication to the Docker daemon's requests -\nyour docker builds and running containers will need extra configuration to use\nthe proxy.\nMiscellaneous options\nIP masquerading uses address translation to allow containers without a public IP to talk\nto other machines on the Internet. This may interfere with some network topologies and\ncan be disabled with --ip-masq=false.\nDocker supports softlinks for the Docker data directory\n(/var/lib/docker) and for /var/lib/docker/tmp. The DOCKER_TMPDIR and the data directory can be set like this:\nDOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1\n# or\nexport DOCKER_TMPDIR=/mnt/disk2/tmp\n/usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1\n\nattach\nUsage: docker attach [OPTIONS] CONTAINER\n\nAttach to a running container\n\n --no-stdin=false Do not attach STDIN\n --sig-proxy=true Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied.\n\nThe docker attach command allows you to attach to a running container using\nthe container's ID or name, either to view its ongoing output or to control it\ninteractively. 
You can attach to the same contained process multiple times\nsimultaneously, screen sharing style, or quickly view the progress of your\ndaemonized process.\nYou can detach from the container (and leave it running) with CTRL-p CTRL-q\n(for a quiet exit) or CTRL-c, which will send a SIGKILL to the container.\nWhen you are attached to a container, and exit its main process, the process's\nexit code will be returned to the client.\nIt is forbidden to redirect the standard input of a docker attach command while\nattaching to a tty-enabled container (i.e.: launched with -t).\nExamples\n$ sudo docker run -d --name topdemo ubuntu /usr/bin/top -b\n$ sudo docker attach topdemo\ntop - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05\nTasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 373572k total, 355560k used, 18012k free, 27872k buffers\nSwap: 786428k total, 0k used, 786428k free, 221740k cached\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top\n\n top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05\n Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n Mem: 373572k total, 355244k used, 18328k free, 27872k buffers\n Swap: 786428k total, 0k used, 786428k free, 221776k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top\n\n\n top - 02:05:58 up 3:06, 0 users, load average: 0.01, 0.02, 0.05\n Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n Mem: 373572k total, 355780k used, 17792k free, 27880k buffers\n Swap: 786428k total, 0k used, 786428k free, 221776k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top\n^C$\n$ echo $?\n0\n$ 
docker ps -a | grep topdemo\n7998ac8581f9 ubuntu:14.04 \"/usr/bin/top -b\" 38 seconds ago Exited (0) 21 seconds ago topdemo\n\nAnd in this second example, you can see the exit code returned by the bash process\nis returned by the docker attach command to its caller too:\n$ sudo docker run --name test -d -it debian\n275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab\n$ sudo docker attach test\n$$ exit 13\nexit\n$ echo $?\n13\n$ sudo docker ps -a | grep test\n275c44472aeb debian:7 \"/bin/bash\" 26 seconds ago Exited (13) 17 seconds ago test\n\nbuild\nUsage: docker build [OPTIONS] PATH | URL | -\n\nBuild a new image from the source code at PATH\n\n --force-rm=false Always remove intermediate containers, even after unsuccessful builds\n --no-cache=false Do not use cache when building the image\n --pull=false Always attempt to pull a newer version of the image\n -q, --quiet=false Suppress the verbose output generated by the containers\n --rm=true Remove intermediate containers after a successful build\n -t, --tag=\"\" Repository name (and optionally a tag) to be applied to the resulting image in case of success\n\nUse this command to build Docker images from a Dockerfile and a\n\"context\".\nThe files at PATH or URL are called the \"context\" of the build. The\nbuild process may refer to any of the files in the context, for example\nwhen using an ADD instruction.\nWhen a single Dockerfile is given as URL or is piped through STDIN\n(docker build - < Dockerfile), then no context is set.\nWhen a Git repository is set as URL, then the repository is used as\nthe context. The Git repository is cloned with its submodules\n(git clone --recursive). A fresh git clone occurs in a temporary directory\non your local host, and then this is sent to the Docker daemon as the\ncontext. 
This way, your local user credentials, VPNs, and so forth can be\nused to access private repositories.\nIf a file named .dockerignore exists in the root of PATH then it\nis interpreted as a newline-separated list of exclusion patterns.\nExclusion patterns match files or directories relative to PATH that\nwill be excluded from the context. Globbing is done using Go's\nfilepath.Match rules.\nPlease note that .dockerignore files in other subdirectories are\nconsidered as normal files. Filepaths in .dockerignore are absolute with\nthe current directory as the root. Wildcards are allowed but the search\nis not recursive.\nExample .dockerignore file\n*/temp*\n*/*/temp*\ntemp?\n\nThe first line above, */temp*, would ignore all files with names starting with\ntemp from any subdirectory below the root directory. For example, a file named\n/somedir/temporary.txt would be ignored. The second line, */*/temp*, will\nignore files starting with name temp from any subdirectory that is two levels\nbelow the root directory. For example, the file /somedir/subdir/temporary.txt\nwould get ignored in this case. The last line in the above example, temp?,\nwill ignore the files that match the pattern from the root directory.\nFor example, the files tempa, tempb are ignored from the root directory.\nCurrently there is no support for regular expressions. Formats\nlike [^temp*] are ignored.\nBy default the docker build command will look for a Dockerfile at the\nroot of the build context. The -f, --file, option lets you specify\nthe path to an alternative file to use instead. This is useful\nin cases where the same set of files are used for multiple builds. The path\nmust be to a file within the build context. 
If a relative path is specified\nthen it must be relative to the current directory.\nSee also:\nDockerfile Reference.\nExamples\n$ sudo docker build .\nUploading context 10240 bytes\nStep 1 : FROM busybox\nPulling repository busybox\n ---> e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/\nStep 2 : RUN ls -lh /\n ---> Running in 9c9e81692ae9\ntotal 24\ndrwxr-xr-x 2 root root 4.0K Mar 12 2013 bin\ndrwxr-xr-x 5 root root 4.0K Oct 19 00:19 dev\ndrwxr-xr-x 2 root root 4.0K Oct 19 00:19 etc\ndrwxr-xr-x 2 root root 4.0K Nov 15 23:34 lib\nlrwxrwxrwx 1 root root 3 Mar 12 2013 lib64 -> lib\ndr-xr-xr-x 116 root root 0 Nov 15 23:34 proc\nlrwxrwxrwx 1 root root 3 Mar 12 2013 sbin -> bin\ndr-xr-xr-x 13 root root 0 Nov 15 23:34 sys\ndrwxr-xr-x 2 root root 4.0K Mar 12 2013 tmp\ndrwxr-xr-x 2 root root 4.0K Nov 15 23:34 usr\n ---> b35f4035db3f\nStep 3 : CMD echo Hello world\n ---> Running in 02071fceb21b\n ---> f52f38b7823e\nSuccessfully built f52f38b7823e\nRemoving intermediate container 9c9e81692ae9\nRemoving intermediate container 02071fceb21b\n\nThis example specifies that the PATH is\n., and so all the files in the local directory get\ntarred and sent to the Docker daemon. The PATH\nspecifies where to find the files for the \"context\" of the build on the\nDocker daemon. Remember that the daemon could be running on a remote\nmachine and that no parsing of the Dockerfile\nhappens at the client side (where you're running\ndocker build). That means that all the files at\nPATH get sent, not just the ones listed to\nADD in the Dockerfile.\nThe transfer of context from the local machine to the Docker daemon is\nwhat the docker client means when you see the\n\"Sending build context\" message.\nIf you wish to keep the intermediate containers after the build is\ncomplete, you must use --rm=false. 
This does not\naffect the build cache.\n$ sudo docker build .\nUploading context 18.829 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello world\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469\n$ echo \".git\" > .dockerignore\n$ sudo docker build .\nUploading context 6.76 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello world\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469\n\nThis example shows the use of the .dockerignore file to exclude the .git\ndirectory from the context. Its effect can be seen in the changed size of the\nuploaded context.\n$ sudo docker build -t vieux/apache:2.0 .\n\nThis will build like the previous example, but it will then tag the\nresulting image. The repository name will be vieux/apache\nand the tag will be 2.0.\n$ sudo docker build - < Dockerfile\n\nThis will read a Dockerfile from STDIN without context. Due to the\nlack of a context, no contents of any local directory will be sent to\nthe Docker daemon. Since there is no context, a Dockerfile ADD only\nworks if it refers to a remote URL.\n$ sudo docker build - < context.tar.gz\n\nThis will build an image for a compressed context read from STDIN.\nSupported formats are: bzip2, gzip and xz.\n$ sudo docker build github.com/creack/docker-firefox\n\nThis will clone the GitHub repository and use the cloned repository as\ncontext. The Dockerfile at the root of the\nrepository is used as Dockerfile. Note that you\ncan specify an arbitrary Git repository by using the git:// or git@\nschema.\n$ sudo docker build -f Dockerfile.debug .\n\nThis will use a file called Dockerfile.debug for the build\ninstructions instead of Dockerfile.\n$ sudo docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .\n$ sudo docker build -f dockerfiles/Dockerfile.prod -t myapp_prod .\n\nThe above commands will build the current build context (as specified by \nthe .) 
twice, once using a debug version of a Dockerfile and once using \na production version.\n$ cd /home/me/myapp/some/dir/really/deep\n$ sudo docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp\n$ sudo docker build -f ../../../../dockerfiles/debug /home/me/myapp\n\nThese two docker build commands do the exact same thing. They both\nuse the contents of the debug file instead of looking for a Dockerfile \nand will use /home/me/myapp as the root of the build context. Note that \ndebug is in the directory structure of the build context, regardless of how \nyou refer to it on the command line.\n\nNote: docker build will return a no such file or directory error\nif the file or directory does not exist in the uploaded context. This may\nhappen if there is no context, or if you specify a file that is elsewhere\non the host system. The context is limited to the current directory (and its\nchildren) for security reasons, and to ensure repeatable builds on remote\nDocker hosts. This is also the reason why ADD ../file will not work.\n\ncommit\nUsage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]\n\nCreate a new image from a container's changes\n\n -a, --author=\"\" Author (e.g., \"John Hannibal Smith hannibal@a-team.com\")\n -m, --message=\"\" Commit message\n -p, --pause=true Pause container during commit\n\nIt can be useful to commit a container's file changes or settings into a\nnew image. This allows you to debug a container by running an interactive\nshell, or to export a working dataset to another server. Generally, it\nis better to use Dockerfiles to manage your images in a documented and\nmaintainable way.\nBy default, the container being committed and its processes will be paused\nwhile the image is committed. 
This reduces the likelihood of\nencountering data corruption during the process of creating the commit.\nIf this behavior is undesired, set the 'p' option to false.\nCommit an existing container\n$ sudo docker ps\nID IMAGE COMMAND CREATED STATUS PORTS\nc3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours\n197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours\n$ sudo docker commit c3f279d17e0a SvenDowideit/testimage:version3\nf5283438590d\n$ sudo docker images | head\nREPOSITORY TAG ID CREATED VIRTUAL SIZE\nSvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB\n\ncp\nCopy files/folders from a container's filesystem to the host\npath. Paths are relative to the root of the filesystem.\nUsage: docker cp CONTAINER:PATH HOSTPATH\n\nCopy files/folders from the PATH to the HOSTPATH\n\ncreate\nCreates a new container.\nUsage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nCreate a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n --add-host=[] Add a custom host-to-IP mapping (host:ip)\n -c, --cpu-shares=0 CPU shares (relative weight)\n --cap-add=[] Add Linux capabilities\n --cap-drop=[] Drop Linux capabilities\n --cidfile=\"\" Write the container ID to the file\n --cpuset=\"\" CPUs in which to allow execution (0-3, 0,1)\n --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)\n --dns=[] Set custom DNS servers\n --dns-search=[] Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)\n -e, --env=[] Set environment variables\n --entrypoint=\"\" Overwrite the default ENTRYPOINT of the image\n --env-file=[] Read in a line delimited file of environment variables\n --expose=[] Expose a port or a range of ports (e.g. 
--expose=3300-3310) from the container without publishing it to your host\n -h, --hostname=\"\" Container host name\n -i, --interactive=false Keep STDIN open even if not attached\n --ipc=\"\" Default is to create a private IPC namespace (POSIX SysV IPC) for the container\n 'container:name|id': reuses another container's shared memory, semaphores and message queues\n 'host': use the host shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.\n --link=[] Add link to another container in the form of name or id:alias\n --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"\n -m, --memory=\"\" Memory limit (format: <number><optional unit>, where unit = b, k, m or g)\n --mac-address=\"\" Container MAC address (e.g. 92:d0:c6:0a:29:33)\n --name=\"\" Assign a name to the container\n --net=\"bridge\" Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container:name|id': reuses another container's network stack\n 'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces\n -p, --publish=[] Publish a container's port, or a range of ports (e.g., `-p 3300-3310`), to the host\n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports. \n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. 
(e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n --privileged=false Give extended privileges to this container\n --read-only=false Mount the container's root filesystem as read only\n --restart=\"\" Restart policy to apply when a container exits (no, on-failure[:max-retry], always)\n --security-opt=[] Security Options\n -t, --tty=false Allocate a pseudo-TTY\n -u, --user=\"\" Username or UID\n -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)\n --volumes-from=[] Mount volumes from the specified container(s)\n -w, --workdir=\"\" Working directory inside the container\n\nThe docker create command creates a writeable container layer over\nthe specified image and prepares it for running the specified command.\nThe container ID is then printed to STDOUT.\nThis is similar to docker run -d except the container is never started.\nYou can then use the docker start container_id command to start the\ncontainer at any point.\nThis is useful when you want to set up a container configuration ahead\nof time so that it is ready to start when you need it.\nPlease see the run command section for more details.\nExamples\n$ sudo docker create -t -i fedora bash\n6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752\n$ sudo docker start -a -i 6d8af538ec5\nbash-4.2#\n\nAs of v1.4.0 container volumes are initialized during the docker create\nphase (i.e., docker run too). 
For example, this allows you to create the\ndata volume container, and then use it from another container:\n$ docker create -v /data --name data ubuntu\n240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57\n$ docker run --rm --volumes-from data ubuntu ls -la /data\ntotal 8\ndrwxr-xr-x 2 root root 4096 Dec 5 04:10 .\ndrwxr-xr-x 48 root root 4096 Dec 5 04:11 ..\n\nSimilarly, create a host directory bind mounted volume container, which\ncan then be used from the subsequent container:\n$ docker create -v /home/docker:/docker --name docker ubuntu\n9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03\n$ docker run --rm --volumes-from docker ubuntu ls -la /docker\ntotal 20\ndrwxr-sr-x 5 1000 staff 180 Dec 5 04:00 .\ndrwxr-xr-x 48 root root 4096 Dec 5 04:13 ..\n-rw-rw-r-- 1 1000 staff 3833 Dec 5 04:01 .ash_history\n-rw-r--r-- 1 1000 staff 446 Nov 28 11:51 .ashrc\n-rw-r--r-- 1 1000 staff 25 Dec 5 04:00 .gitconfig\ndrwxr-sr-x 3 1000 staff 60 Dec 1 03:28 .local\n-rw-r--r-- 1 1000 staff 920 Nov 28 11:51 .profile\ndrwx--S--- 2 1000 staff 460 Dec 5 00:51 .ssh\ndrwxr-xr-x 32 1000 staff 1140 Dec 5 04:01 docker\n\ndiff\nList the changed files and directories in a container's filesystem\nUsage: docker diff CONTAINER\n\nInspect changes on a container's filesystem\n\nThere are 3 events that are listed in the diff:\n\nA - Add\nD - Delete\nC - Change\n\nFor example:\n$ sudo docker diff 7bb0e258aefe\n\nC /dev\nA /dev/kmsg\nC /etc\nA /etc/mtab\nA /go\nA /go/src\nA /go/src/github.com\nA /go/src/github.com/docker\nA /go/src/github.com/docker/docker\nA /go/src/github.com/docker/docker/.git\n....\n\nevents\nUsage: docker events [OPTIONS]\n\nGet real time events from the server\n\n -f, --filter=[] Provide filter values (i.e., 'event=stop')\n --since=\"\" Show all events created since timestamp\n --until=\"\" Stream events until this timestamp\n\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, oom, pause, restart, start, 
stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nFiltering\nThe filtering flag (-f or --filter) format is a \"key=value\" pair. If you would like to use\nmultiple filters, pass multiple flags (e.g., --filter \"foo=bar\" --filter \"bif=baz\")\nUsing the same filter multiple times will be handled as an OR; for example\n--filter container=588a23dac085 --filter container=a8f7720b8c22 will display events for\ncontainer 588a23dac085 OR container a8f7720b8c22\nUsing multiple filters will be handled as an AND; for example\n--filter container=588a23dac085 --filter event=start will display events for\ncontainer 588a23dac085 AND the event type is start\nCurrent filters:\n * event\n * image\n * container\nExamples\nYou'll need two shells for this example.\nShell 1: Listening for events:\n$ sudo docker events\n\nShell 2: Start and Stop containers:\n$ sudo docker start 4386fb97867d\n$ sudo docker stop 4386fb97867d\n$ sudo docker stop 7805c1d35632\n\nShell 1: (Again .. now showing events):\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\nShow events in the past from a specified time:\n$ sudo docker events --since 1378216169\n2014-03-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-03-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --since '2013-09-03'\n2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) 
die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --since '2013-09-03T15:49:29'\n2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\nFilter events:\n$ sudo docker events --filter 'event=stop'\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'image=ubuntu-1:14.04'\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n\n$ sudo docker events --filter 'container=7805c1d35632'\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d'\n2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'container=7805c1d35632' --filter 'event=stop'\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\nexec\nUsage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]\n\nRun a command in a running container\n\n -d, --detach=false Detached mode: run command in the 
background\n -i, --interactive=false Keep STDIN open even if not attached\n -t, --tty=false Allocate a pseudo-TTY\n\nThe docker exec command runs a new command in a running container.\nThe command started using docker exec will only run while the container's primary\nprocess (PID 1) is running, and will not be restarted if the container is restarted.\nIf the container is paused, then the docker exec command will fail with an error:\n$ docker pause test\ntest\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n1ae3b36715d2 ubuntu:latest \"bash\" 17 seconds ago Up 16 seconds (Paused) test\n$ docker exec test ls\nFATA[0000] Error response from daemon: Container test is paused, unpause the container before exec\n$ echo $?\n1\n\nExamples\n$ sudo docker run --name ubuntu_bash --rm -i -t ubuntu bash\n\nThis will create a container named ubuntu_bash and start a Bash session.\n$ sudo docker exec -d ubuntu_bash touch /tmp/execWorks\n\nThis will create a new file /tmp/execWorks inside the running container\nubuntu_bash, in the background.\n$ sudo docker exec -it ubuntu_bash bash\n\nThis will create a new Bash session in the container ubuntu_bash.\nexport\nUsage: docker export CONTAINER\n\nExport the contents of a filesystem as a tar archive to STDOUT\n\nFor example:\n$ sudo docker export red_panda > latest.tar\n\n\nNote:\ndocker export does not export the contents of volumes associated with the\ncontainer. 
If a volume is mounted on top of an existing directory in the \ncontainer, docker export will export the contents of the underlying \ndirectory, not the contents of the volume.\nRefer to Backup, restore, or migrate data volumes\nin the user guide for examples on exporting data in a volume.\n\nhistory\nUsage: docker history [OPTIONS] IMAGE\n\nShow the history of an image\n\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only show numeric IDs\n\nTo see how the docker:latest image was built:\n$ sudo docker history docker\nIMAGE CREATED CREATED BY SIZE\n3e23a5875458790b7a806f95f7ec0d0b2a5c1659bfc899c89f939f6d5b8f7094 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B\n8578938dd17054dce7993d21de79e96a037400e8d28e15e7290fea4f65128a36 8 days ago /bin/sh -c dpkg-reconfigure locales locale-gen C.UTF-8 /usr/sbin/update-locale LANG=C.UTF-8 1.245 MB\nbe51b77efb42f67a5e96437b3e102f81e0a1399038f77bf28cea0ed23a65cf60 8 days ago /bin/sh -c apt-get update apt-get install -y git libxml2-dev python build-essential make gcc python-dev locales python-pip 338.3 MB\n4b137612be55ca69776c7f30c2d2dd0aa2e7d72059820abf3e25b629f887a084 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB\n750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi admwiggin@gmail.com - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian 0 B\n511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 9 months ago 0 B\n\nimages\nUsage: docker images [OPTIONS] [REPOSITORY]\n\nList images\n\n -a, --all=false Show all images (by default filter out the intermediate image layers)\n -f, --filter=[] Provide filter values (i.e., 'dangling=true')\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only show numeric IDs\n\nThe default docker images will show all top level\nimages, their repository and tags, and their virtual size.\nDocker images have intermediate layers that increase 
reusability,\ndecrease disk usage, and speed up docker build by\nallowing each step to be cached. These intermediate layers are not shown\nby default.\nThe VIRTUAL SIZE is the cumulative space taken up by the image and all\nits parent images. This is also the disk space used by the contents of the\nTar file created when you docker save an image.\nAn image will be listed more than once if it has multiple repository names\nor tags. This single image (identifiable by its matching IMAGE ID)\nuses up the VIRTUAL SIZE listed only once.\nListing the most recently created images\n$ sudo docker images | head\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nnone none 77af4d6b9913 19 hours ago 1.089 GB\ncommitt latest b6fa739cedf5 19 hours ago 1.089 GB\nnone none 78a85c484f71 19 hours ago 1.089 GB\ndocker latest 30557a29d5ab 20 hours ago 1.089 GB\nnone none 5ed6274db6ce 24 hours ago 1.089 GB\npostgres 9 746b819f315e 4 days ago 213.4 MB\npostgres 9.3 746b819f315e 4 days ago 213.4 MB\npostgres 9.3.5 746b819f315e 4 days ago 213.4 MB\npostgres latest 746b819f315e 4 days ago 213.4 MB\n\nListing the full length image IDs\n$ sudo docker images --no-trunc | head\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nnone none 77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182 19 hours ago 1.089 GB\ncommittest latest b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f 19 hours ago 1.089 GB\nnone none 78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921 19 hours ago 1.089 GB\ndocker latest 30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4 20 hours ago 1.089 GB\nnone none 0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5 20 hours ago 1.089 GB\nnone none 18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b 22 hours ago 1.082 GB\nnone none f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a 23 hours ago 1.089 GB\ntryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 
131.5 MB\nnone none 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB\n\nFiltering\nThe filtering flag (-f or --filter) format is a \"key=value\" pair. If there is more\nthan one filter, then pass multiple flags (e.g., --filter \"foo=bar\" --filter \"bif=baz\")\nCurrent filters:\n * dangling (boolean - true or false)\nUntagged images\n$ sudo docker images --filter \"dangling=true\"\n\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nnone none 8abc22fbb042 4 weeks ago 0 B\nnone none 48e5f45168b9 4 weeks ago 2.489 MB\nnone none bf747efa0e2f 4 weeks ago 0 B\nnone none 980fe10e5736 12 weeks ago 101.4 MB\nnone none dea752e4e117 12 weeks ago 101.4 MB\nnone none 511136ea3c5a 8 months ago 0 B\n\nThis will display untagged images that are the leaves of the image tree (not\nintermediary layers). These images occur when a new build of an image takes the\nrepo:tag away from the image ID, leaving it untagged. A warning will be issued\nif you try to remove an image while a container is presently using it.\nThis flag allows for batch cleanup.\nReady for use by docker rmi ..., like:\n$ sudo docker rmi $(sudo docker images -f \"dangling=true\" -q)\n\n8abc22fbb042\n48e5f45168b9\nbf747efa0e2f\n980fe10e5736\ndea752e4e117\n511136ea3c5a\n\nNOTE: Docker will warn you if any containers exist that are using these untagged images.\nimport\nUsage: docker import URL|- [REPOSITORY[:TAG]]\n\nCreate an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.\n\nURLs must start with http and point to a single file archive (.tar,\n.tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a root filesystem. 
If\nyou would like to import from a local directory or archive, you can use\nthe - parameter to take the data from STDIN.\nExamples\nImport from a remote location:\nThis will create a new untagged image.\n$ sudo docker import http://example.com/exampleimage.tgz\n\nImport from a local file:\nImport to docker via pipe and STDIN.\n$ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new\n\nImport from a local directory:\n$ sudo tar -c . | sudo docker import - exampleimagedir\n\nNote the sudo in this example \u2013 you must preserve\nthe ownership of the files (especially root ownership) during the\narchiving with tar. If you are not root (or do not use the sudo command) when you\ntar, then the ownerships might not be preserved.\ninfo\nUsage: docker info\n\nDisplay system-wide information\n\nFor example:\n$ sudo docker -D info\nContainers: 14\nImages: 52\nStorage Driver: aufs\n Root Dir: /var/lib/docker/aufs\n Backing Filesystem: extfs\n Dirs: 545\nExecution Driver: native-0.2\nKernel Version: 3.13.0-24-generic\nOperating System: Ubuntu 14.04 LTS\nCPUs: 1\nName: prod-server-42\nID: 7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\nTotal Memory: 2 GiB\nDebug mode (server): false\nDebug mode (client): true\nFds: 10\nGoroutines: 9\nEventsListeners: 0\nInit Path: /usr/bin/docker\nDocker Root Dir: /var/lib/docker\nUsername: svendowideit\nRegistry: [https://index.docker.io/v1/]\nLabels:\n storage=ssd\n\nThe global -D option tells all docker commands to output debug information.\nWhen sending issue reports, please use docker version and docker -D info to\nensure we know how your setup is configured.\ninspect\nUsage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]\n\nReturn low-level information on a container or image\n\n -f, --format=\"\" Format the output using the given go template.\n\nBy default, this will render all results in a JSON array. 
If a format is\nspecified, the given template will be executed for each result.\nGo's text/template package\ndescribes all the details of the format.\nExamples\nGet an instance's IP address:\nFor the most part, you can pick out any field from the JSON in a fairly\nstraightforward manner.\n$ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID\n\nGet an instance's MAC Address:\nFor the most part, you can pick out any field from the JSON in a fairly\nstraightforward manner.\n$ sudo docker inspect --format='{{.NetworkSettings.MacAddress}}' $INSTANCE_ID\n\nList All Port Bindings:\nOne can loop over arrays and maps in the results to produce simple text\noutput:\n$ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} - {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID\n\nFind a Specific Port Mapping:\nThe .Field syntax doesn't work when the field name begins with a\nnumber, but the template language's index function does. The\n.NetworkSettings.Ports section contains a map of the internal port\nmappings to a list of external address/port objects, so to grab just the\nnumeric public port, you use index to find the specific port map, and\nthen index 0 contains the first object inside of that. Then we ask for\nthe HostPort field to get the public address.\n$ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports \"8787/tcp\") 0).HostPort}}' $INSTANCE_ID\n\nGet config:\nThe .Field syntax doesn't work when the field contains JSON data, but\nthe template language's custom json function does. 
The .config\nsection contains a complex JSON object, so to grab it as JSON, you use\njson to convert the configuration object into JSON.\n$ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID\n\nkill\nUsage: docker kill [OPTIONS] CONTAINER [CONTAINER...]\n\nKill a running container using SIGKILL or a specified signal\n\n -s, --signal=\"KILL\" Signal to send to the container\n\nThe main process inside the container will be sent SIGKILL, or any\nsignal specified with option --signal.\nload\nUsage: docker load [OPTIONS]\n\nLoad an image from a tar archive on STDIN\n\n -i, --input=\"\" Read from a tar archive file, instead of STDIN\n\nLoads a tarred repository from a file or the standard input stream.\nRestores both images and tags.\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\n$ sudo docker load < busybox.tar\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nbusybox latest 769b9341d937 7 weeks ago 2.489 MB\n$ sudo docker load --input fedora.tar\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nbusybox latest 769b9341d937 7 weeks ago 2.489 MB\nfedora rawhide 0d20aec6529d 7 weeks ago 387 MB\nfedora 20 58394af37342 7 weeks ago 385.5 MB\nfedora heisenbug 58394af37342 7 weeks ago 385.5 MB\nfedora latest 58394af37342 7 weeks ago 385.5 MB\n\nlogin\nUsage: docker login [OPTIONS] [SERVER]\n\nRegister or log in to a Docker registry server, if no server is specified \"https://index.docker.io/v1/\" is the default.\n\n -e, --email=\"\" Email\n -p, --password=\"\" Password\n -u, --username=\"\" Username\n\nIf you want to log in to a self-hosted registry you can specify this by\nadding the server name.\nexample:\n$ sudo docker login localhost:8080\n\nlogout\nUsage: docker logout [SERVER]\n\nLog out from a Docker registry, if no server is specified \"https://index.docker.io/v1/\" is the default.\n\nFor example:\n$ sudo docker logout localhost:8080\n\nlogs\nUsage: docker logs [OPTIONS] CONTAINER\n\nFetch the logs of a 
container\n\n -f, --follow=false Follow log output\n -t, --timestamps=false Show timestamps\n --tail=\"all\" Output the specified number of lines at the end of logs (defaults to all logs)\n\nThe docker logs command batch-retrieves logs present at the time of execution.\nThe docker logs --follow command will continue streaming the new output from\nthe container's STDOUT and STDERR.\nPassing a negative number or a non-integer to --tail is invalid and the\nvalue is set to all in that case. This behavior may change in the future.\nThe docker logs --timestamps command will add an RFC3339Nano\ntimestamp, for example 2014-09-16T06:17:46.000000000Z, to each\nlog entry. To ensure that the timestamps are aligned, the\nnanosecond part of the timestamp will be padded with zeros when necessary.\npause\nUsage: docker pause CONTAINER\n\nPause all processes within a container\n\nThe docker pause command uses the cgroups freezer to suspend all processes in\na container. Traditionally, when suspending a process the SIGSTOP signal is\nused, which is observable by the process being suspended. 
With the cgroups freezer\nthe process is unaware, and unable to capture, that it is being suspended,\nand subsequently resumed.\nSee the\ncgroups freezer documentation\nfor further details.\nport\nUsage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]\n\nList port mappings for the CONTAINER, or look up the public-facing port that is NAT-ed to the PRIVATE_PORT\n\nYou can find out all the ports mapped by not specifying a PRIVATE_PORT, or\njust a specific mapping:\n$ sudo docker ps test\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nb650456536c7 busybox:latest top 54 minutes ago Up 54 minutes 0.0.0.0:1234-9876/tcp, 0.0.0.0:4321-7890/tcp test\n$ sudo docker port test\n7890/tcp - 0.0.0.0:4321\n9876/tcp - 0.0.0.0:1234\n$ sudo docker port test 7890/tcp\n0.0.0.0:4321\n$ sudo docker port test 7890/udp\n2014/06/24 11:53:36 Error: No public port '7890/udp' published for test\n$ sudo docker port test 7890\n0.0.0.0:4321\n\nrename\nUsage: docker rename OLD_NAME NEW_NAME\n\nRename an existing container to a NEW_NAME\n\nThe docker rename command allows the container to be renamed to a different name. \nps\nUsage: docker ps [OPTIONS]\n\nList containers\n\n -a, --all=false Show all containers. Only running containers are shown by default.\n --before=\"\" Show only container created before Id or Name, include non-running ones.\n -f, --filter=[] Provide filter values. 
Valid filters:\n exited=int - containers with exit code of int\n status=(restarting|running|paused|exited)\n -l, --latest=false Show only the latest created container, include non-running ones.\n -n=-1 Show n last created containers, include non-running ones.\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only display numeric IDs\n -s, --size=false Display total file sizes\n --since=\"\" Show only containers created since Id or Name, include non-running ones.\n\nRunning docker ps --no-trunc showing 2 linked containers.\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nf7ee772232194fcc088c6bdec6ea09f7b3f6c54d53934658164b8602d7cd4744 ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp\nd0963715a061c7c7b7cc80b2646da913a959fbf13e80a971d4a60f6997a2f595 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db\n\ndocker ps will show only running containers by default. To see all containers:\ndocker ps -a\nFiltering\nThe filtering flag (-f or --filter) format is a key=value pair. If there is more\nthan one filter, then pass multiple flags (e.g. --filter \"foo=bar\" --filter \"bif=baz\")\nCurrent filters:\n * exited (int - the code of exited containers. 
Only useful with '--all')\n * status (restarting|running|paused|exited)\nSuccessfully exited containers\n$ sudo docker ps -a --filter 'exited=0'\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nea09c3c82f6e registry:latest /srv/run.sh 2 weeks ago Exited (0) 2 weeks ago 127.0.0.1:5000-5000/tcp desperate_leakey\n106ea823fe4e fedora:latest /bin/sh -c 'bash -l' 2 weeks ago Exited (0) 2 weeks ago determined_albattani\n48ee228c9464 fedora:20 bash 2 weeks ago Exited (0) 2 weeks ago tender_torvalds\n\nThis shows all the containers that have exited with status of '0'\npull\nUsage: docker pull [OPTIONS] NAME[:TAG]\n\nPull an image or a repository from the registry\n\n -a, --all-tags=false Download all tagged images in the repository\n\nMost of your images will be created on top of a base image from the\nDocker Hub registry.\nDocker Hub contains many pre-built images that you\ncan pull and try without needing to define and configure your own.\nIt is also possible to manually specify the path of a registry to pull from.\nFor example, if you have set up a local registry, you can specify its path to\npull from it. A repository path is similar to a URL, but does not contain\na protocol specifier (https://, for example).\nTo download a particular image, or set of images (i.e., a repository),\nuse docker pull:\n$ sudo docker pull debian\n# will pull the debian:latest image, its intermediate layers\n# and any aliases of the same id\n$ sudo docker pull debian:testing\n# will pull the image named debian:testing and any intermediate\n# layers it is based on.\n# (Typically the empty `scratch` image, a MAINTAINER layer,\n# and the un-tarred base).\n$ sudo docker pull --all-tags centos\n# will pull all the images from the centos repository\n$ sudo docker pull registry.hub.docker.com/debian\n# manually specifies the path to the default Docker registry. 
This could\n# be replaced with the path to a local registry to pull from another source.\n\npush\nUsage: docker push NAME[:TAG]\n\nPush an image or a repository to the registry\n\nUse docker push to share your images to the Docker Hub\nregistry or to a self-hosted one.\nrestart\nUsage: docker restart [OPTIONS] CONTAINER [CONTAINER...]\n\nRestart a running container\n\n -t, --time=10 Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.\n\nrm\nUsage: docker rm [OPTIONS] CONTAINER [CONTAINER...]\n\nRemove one or more containers\n\n -f, --force=false Force the removal of a running container (uses SIGKILL)\n -l, --link=false Remove the specified link and not the underlying container\n -v, --volumes=false Remove the volumes associated with the container\n\nExamples\n$ sudo docker rm /redis\n/redis\n\nThis will remove the container referenced under the link\n/redis.\n$ sudo docker rm --link /webapp/redis\n/webapp/redis\n\nThis will remove the underlying link between /webapp and the /redis\ncontainers, removing all network communication.\n$ sudo docker rm --force redis\nredis\n\nThe main process inside the container referenced under the link /redis will receive\nSIGKILL, then the container will be removed.\n$ sudo docker rm $(sudo docker ps -a -q)\n\nThis command will delete all stopped containers. The command docker ps\n-a -q will return all existing container IDs and pass them to the rm\ncommand which will delete them. Any running containers will not be\ndeleted.\nrmi\nUsage: docker rmi [OPTIONS] IMAGE [IMAGE...]\n\nRemove one or more images\n\n -f, --force=false Force removal of the image\n --no-prune=false Do not delete untagged parents\n\nRemoving tagged images\nImages can be removed either by their short or long IDs, or their image\nnames. 
If an image has more than one name, each of them needs to be\nremoved before the image is removed.\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\ntest1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\ntest latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\ntest2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\n\n$ sudo docker rmi fd484f19954f\nError: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories\n2013/12/11 05:47:16 Error: failed to remove one or more images\n\n$ sudo docker rmi test1\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\n$ sudo docker rmi test2\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\n\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\ntest latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\n$ sudo docker rmi test\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\nDeleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\n\nrun\nUsage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nRun a command in a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n --add-host=[] Add a custom host-to-IP mapping (host:ip)\n -c, --cpu-shares=0 CPU shares (relative weight)\n --cap-add=[] Add Linux capabilities\n --cap-drop=[] Drop Linux capabilities\n --cidfile=\"\" Write the container ID to the file\n --cpuset=\"\" CPUs in which to allow execution (0-3, 0,1)\n -d, --detach=false Detached mode: run the container in the background and print the new container ID\n --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)\n --dns=[] Set custom DNS servers\n --dns-search=[] Set custom DNS search domains (Use --dns-search=. 
if you don't wish to set the search domain)\n -e, --env=[] Set environment variables\n --entrypoint=\"\" Overwrite the default ENTRYPOINT of the image\n --env-file=[] Read in a line delimited file of environment variables\n --expose=[] Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host\n -h, --hostname=\"\" Container host name\n -i, --interactive=false Keep STDIN open even if not attached\n --ipc=\"\" Default is to create a private IPC namespace (POSIX SysV IPC) for the container\n 'container:<name|id>': reuses another container's shared memory, semaphores and message queues\n 'host': use the host shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.\n --link=[] Add link to another container in the form of name:alias\n --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"\n -m, --memory=\"\" Memory limit (format: <number><optional unit>, where unit = b, k, m or g)\n --memory-swap=\"\" Total memory usage (memory + swap), set '-1' to disable swap (format: <number><optional unit>, where unit = b, k, m or g)\n --mac-address=\"\" Container MAC address (e.g. 92:d0:c6:0a:29:33)\n --name=\"\" Assign a name to the container\n --net=\"bridge\" Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container:<name|id>': reuses another container's network stack\n 'host': use the host network stack inside the container. 
Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces\n -p, --publish=[] Publish a container's port to the host\n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports. \n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n --pid=host 'host': use the host PID namespace inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n --privileged=false Give extended privileges to this container\n --read-only=false Mount the container's root filesystem as read only\n --restart=\"\" Restart policy to apply when a container exits (no, on-failure[:max-retry], always)\n --rm=false Automatically remove the container when it exits (incompatible with -d)\n --security-opt=[] Security Options\n --sig-proxy=true Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.\n -t, --tty=false Allocate a pseudo-TTY\n -u, --user=\"\" Username or UID\n -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)\n --volumes-from=[] Mount volumes from the specified container(s)\n -w, --workdir=\"\" Working directory inside the container\n\nThe docker run command first creates a writeable container layer over the\nspecified image, and then starts it using the specified command. That is,\ndocker run is equivalent to the API /containers/create then\n/containers/(id)/start. A stopped container can be restarted with all its\nprevious changes intact using docker start. 
See docker ps -a to view a list\nof all containers.\nThere is detailed information about docker run in the Docker run reference.\nThe docker run command can be used in combination with docker commit to\nchange the command that a container runs.\nSee the Docker User Guide for more detailed\ninformation about the --expose, -p, -P and --link parameters,\nand linking containers.\nExamples\n$ sudo docker run --name test -it debian\n$$ exit 13\nexit\n$ echo $?\n13\n$ sudo docker ps -a | grep test\n275c44472aeb debian:7 \"/bin/bash\" 26 seconds ago Exited (13) 17 seconds ago test\n\nIn this example, we are running bash interactively in the debian:latest image, and giving\nthe container the name test. We then quit bash by running exit 13, which means bash\nwill have an exit code of 13. This is then passed on to the caller of docker run, and\nis recorded in the test container metadata.\n$ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo \"test\"\n\nThis will create a container and print test to the console. The cidfile\nflag makes Docker attempt to create a new file and write the container ID to it.\nIf the file exists already, Docker will return an error. Docker will close this\nfile when docker run exits.\n$ sudo docker run -t -i --rm ubuntu bash\nroot@bc338942ef20:/# mount -t tmpfs none /mnt\nmount: permission denied\n\nThis will not work, because by default, most potentially dangerous kernel\ncapabilities are dropped; including cap_sys_admin (which is required to mount\nfilesystems). However, the --privileged flag will allow it to run:\n$ sudo docker run --privileged ubuntu bash\nroot@50e3f57e16e6:/# mount -t tmpfs none /mnt\nroot@50e3f57e16e6:/# df -h\nFilesystem Size Used Avail Use% Mounted on\nnone 1.9G 0 1.9G 0% /mnt\n\nThe --privileged flag gives all capabilities to the container, and it also\nlifts all the limitations enforced by the device cgroup controller. In other\nwords, the container can then do almost everything that the host can do. 
This\nflag exists to allow special use-cases, like running Docker within Docker.\n$ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd\n\nThe -w option runs the command inside the given directory, here\n/path/to/dir/. If the path does not exist, it is created inside the container.\n$ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd\n\nThe -v flag mounts the current working directory into the container. The -w\noption runs the command inside the current working directory, by\nchanging into the directory returned by pwd. So this\ncombination executes the command using the container, but inside the\ncurrent working directory.\n$ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash\n\nWhen the host directory of a bind-mounted volume doesn't exist, Docker\nwill automatically create this directory on the host for you. In the\nexample above, Docker will create the /doesnt/exist\nfolder before starting your container.\n$ sudo docker run --read-only -v /icanwrite busybox touch /icanwrite/here\n\nVolumes can be used in combination with --read-only to control where \na container writes files. The --read-only flag mounts the container's root\nfilesystem as read only, prohibiting writes to locations other than the\nspecified volumes for the container.\n$ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh\n\nBy bind-mounting the docker unix socket and statically linked docker\nbinary (such as that provided by https://get.docker.com), you give the container full access to create and\nmanipulate the host's Docker daemon.\n$ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash\n\nThis binds port 8080 of the container to port 80 on 127.0.0.1 of\nthe host machine. 
The Docker User Guide\nexplains in detail how to manipulate ports in Docker.\n$ sudo docker run --expose 80 ubuntu bash\n\nThis exposes port 80 of the container for use within a link without\npublishing the port to the host system's interfaces. The Docker User\nGuide explains in detail how to manipulate\nports in Docker.\n$ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash\n\nThis sets environmental variables in the container. For illustration all three\nflags are shown here. Where -e, --env take an environment variable and\nvalue, or if no = is provided, then that variable's current value is passed\nthrough (i.e. $MYVAR1 from the host is set to $MYVAR1 in the container). \nWhen no = is provided and that variable is not defined in the client's\nenvironment then that variable will be removed from the container's list of\nenvironment variables.\nAll three flags, -e, --env and --env-file can be repeated.\nRegardless of the order of these three flags, the --env-file are processed\nfirst, and then -e, --env flags. This way, the -e or --env will\noverride variables as needed.\n$ cat ./env.list\nTEST_FOO=BAR\n$ sudo docker run --env TEST_FOO=\"This is a test\" --env-file ./env.list busybox env | grep TEST_FOO\nTEST_FOO=This is a test\n\nThe --env-file flag takes a filename as an argument and expects each line\nto be in the VAR=VAL format, mimicking the argument passed to --env. 
Comment\nlines need only be prefixed with #\nAn example of a file passed with --env-file\n$ cat ./env.list\nTEST_FOO=BAR\n\n# this is a comment\nTEST_APP_DEST_HOST=10.10.0.127\nTEST_APP_DEST_PORT=8888\n\n# pass through this variable from the caller\nTEST_PASSTHROUGH\n$ sudo TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env\nHOME=/\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nHOSTNAME=5198e0745561\nTEST_FOO=BAR\nTEST_APP_DEST_HOST=10.10.0.127\nTEST_APP_DEST_PORT=8888\nTEST_PASSTHROUGH=howdy\n\n$ sudo docker run --name console -t -i ubuntu bash\n\nThis will create and run a new container with the container name being\nconsole.\n$ sudo docker run --link /redis:redis --name console ubuntu bash\n\nThe --link flag will link the container named /redis into the newly\ncreated container with the alias redis. The new container can access the\nnetwork and environment of the redis container via environment variables.\nThe --name flag will assign the name console to the newly created\ncontainer.\n$ sudo docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd\n\nThe --volumes-from flag mounts all the defined volumes from the referenced\ncontainers. Containers can be specified by repetitions of the --volumes-from\nargument. The container ID may be optionally suffixed with :ro or :rw to\nmount the volumes in read-only or read-write mode, respectively. By default,\nthe volumes are mounted in the same mode (read write or read only) as\nthe reference container.\nThe -a flag tells docker run to bind to the container's STDIN, STDOUT or\nSTDERR. 
This makes it possible to manipulate the output and input as needed.\n$ echo \"test\" | sudo docker run -i -a stdin ubuntu cat -\n\nThis pipes data into a container and prints the container's ID by attaching\nonly to the container's STDIN.\n$ sudo docker run -a stderr ubuntu echo test\n\nThis isn't going to print anything unless there's an error because we've\nonly attached to the STDERR of the container. The container's logs\nstill store what's been written to STDERR and STDOUT.\n$ cat somefile | sudo docker run -i -a stdin mybuilder dobuild\n\nThis is how piping a file into a container could be done for a build.\nThe container's ID will be printed after the build is done and the build\nlogs could be retrieved using docker logs. This is\nuseful if you need to pipe a file or something else into a container and\nretrieve the container's ID once the container has finished running.\n$ sudo docker run --device=/dev/sdc:/dev/xvdc --device=/dev/sdd --device=/dev/zero:/dev/nulo -i -t ubuntu ls -l /dev/{xvdc,sdd,nulo}\n brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc\n brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd\n crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo\nIt is often necessary to directly expose devices to a container. The --device\noption enables that. 
For example, a specific block storage device or loop\ndevice or audio device can be added to an otherwise unprivileged container\n(without the --privileged flag) and have the application directly access it.\nBy default, the container will be able to read, write and mknod these devices.\nThis can be overridden using a third :rwm set of options to each --device\nflag:\n $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc\n You will not be able to write the partition table.\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc\n fdisk: unable to open /dev/xvdc: Operation not permitted\n\n\n\nNote:\n--device cannot be safely used with ephemeral devices. Block devices that\nmay be removed should not be added to untrusted containers with --device.\n\nA complete example:\n$ sudo docker run -d --name static static-web-files sh\n$ sudo docker run -d --expose=8098 --name riak riakserver\n$ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver\n$ sudo docker run -d -p 1443:443 --dns=10.0.0.1 --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver\n$ sudo docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log\n\nThis example shows five containers that might be set up to test a web\napplication change:\n\nStart a pre-prepared volume image static-web-files (in the background)\n that has CSS, image and static HTML in it, (with a VOLUME instruction in\n the Dockerfile to allow the web server to use those files);\nStart a pre-prepared riakserver image, give the container name riak and\n expose port 8098 to any containers that link 
to it;\nStart the appserver image, restricting its memory usage to 100MB, setting\n two environment variables DEVELOPMENT and BRANCH and bind-mounting the\n current directory ($(pwd)) in the container in read-only mode as /app/bin;\nStart the webserver, mapping port 443 in the container to port 1443 on\n the Docker server, setting the DNS server to 10.0.0.1 and DNS search\n domain to dev.org, creating a volume to put the log files into (so we can\n access it from another container), then importing the files from the volume\n exposed by the static container, and linking to all exposed ports from\n riak and app. Lastly, we set the hostname to web.sven.dev.org so its\n consistent with the pre-generated SSL certificate;\nFinally, we create a container that runs tail -f access.log using the logs\n volume from the web container, setting the workdir to /var/log/httpd. The\n --rm option means that when the container exits, the container's layer is\n removed.\n\nRestart Policies\nUse Docker's --restart to specify a container's restart policy. A restart \npolicy controls whether the Docker daemon restarts a container after exit.\nDocker supports the following restart policies:\n\n \n \n Policy\n Result\n \n \n \n \n no\n \n Do not automatically restart the container when it exits. 
This is the \n default.\n \n \n \n \n \n on-failure[:max-retries]\n \n \n \n Restart only if the container exits with a non-zero exit status.\n Optionally, limit the number of restart retries the Docker \n daemon attempts.\n \n \n \n always\n \n Always restart the container regardless of the exit status.\n When you specify always, the Docker daemon will try to restart\n the container indefinitely.\n \n \n \n\n\n$ sudo docker run --restart=always redis\n\nThis will run the redis container with a restart policy of always\nso that if the container exits, Docker will restart it.\nMore detailed information on restart policies can be found in the \nRestart Policies (--restart) section\nof the Docker run reference page.\nAdding entries to a container hosts file\nYou can add other hosts into a container's /etc/hosts file by using one or more\n--add-host flags. This example adds a static address for a host named docker:\n $ docker run --add-host=docker:10.180.0.1 --rm -it debian\n $$ ping docker\n PING docker (10.180.0.1): 48 data bytes\n 56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms\n 56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms\n ˆC--- docker ping statistics ---\n 2 packets transmitted, 2 packets received, 0% packet loss\n round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms\n\n\n\nNote:\nSometimes you need to connect to the Docker host, which means getting the IP\naddress of the host. 
You can use the following shell commands to simplify this\nprocess:\n $ alias hostip=\"ip route show 0.0.0.0/0 | grep -Eo 'via \\S+' | awk '{ print \\$2 }'\"\n $ docker run --add-host=docker:$(hostip) --rm -it debian\n\n\nsave\nUsage: docker save [OPTIONS] IMAGE [IMAGE...]\n\nSave an image(s) to a tar archive (streamed to STDOUT by default)\n\n -o, --output=\"\" Write to a file, instead of STDOUT\n\nProduces a tarred repository to the standard output stream.\nContains all parent layers, and all tags + versions, or specified repo:tag, for\neach argument provided.\nIt is used to create a backup that can then be used with docker load\n$ sudo docker save busybox busybox.tar\n$ ls -sh busybox.tar\n2.7M busybox.tar\n$ sudo docker save --output busybox.tar busybox\n$ ls -sh busybox.tar\n2.7M busybox.tar\n$ sudo docker save -o fedora-all.tar fedora\n$ sudo docker save -o fedora-latest.tar fedora:latest\n\nIt is even useful to cherry-pick particular tags of an image repository\n$ sudo docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy\nsearch\nSearch Docker Hub for images\nUsage: docker search [OPTIONS] TERM\n\nSearch the Docker Hub for images\n\n --automated=false Only show automated builds\n --no-trunc=false Don't truncate output\n -s, --stars=0 Only displays with at least x stars\n\nSee Find Public Images on Docker Hub for\nmore details on finding shared images from the command line.\n\nNote: \nSearch queries will only return up to 25 results \n\nstart\nUsage: docker start [OPTIONS] CONTAINER [CONTAINER...]\n\nRestart a stopped container\n\n -a, --attach=false Attach container's STDOUT and STDERR and forward all signals to the process\n -i, --interactive=false Attach container's STDIN\n\nstats\nUsage: docker stats CONTAINER [CONTAINER...]\n\nDisplay a live stream of one or more containers' resource usage statistics\n\n --help=false Print usage\n\n\nNote: this functionality currently only works when using the libcontainer exec-driver.\n\nRunning docker stats on multiple 
containers\n$ sudo docker stats redis1 redis2\nCONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O\nredis1 0.07% 796 KiB/64 MiB 1.21% 788 B/648 B\nredis2 0.07% 2.746 MiB/64 MiB 4.29% 1.266 KiB/648 B\n\nThe docker stats command will only return a live stream of data for running \ncontainers. Stopped containers will not return any data.\n\nNote:\nIf you want more detailed information about a container's resource usage, use the API endpoint.\n\nstop\nUsage: docker stop [OPTIONS] CONTAINER [CONTAINER...]\n\nStop a running container by sending SIGTERM and then SIGKILL after a grace period\n\n -t, --time=10 Number of seconds to wait for the container to stop before killing it. Default is 10 seconds.\n\nThe main process inside the container will receive SIGTERM, and after a\ngrace period, SIGKILL.\ntag\nUsage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]\n\nTag an image into a repository\n\n -f, --force=false Force\n\nYou can group your images together using names and tags, and then upload\nthem to Share Images via Repositories.\ntop\nUsage: docker top CONTAINER [ps OPTIONS]\n\nDisplay the running processes of a container\n\nunpause\nUsage: docker unpause CONTAINER\n\nUnpause all processes within a container\n\nThe docker unpause command uses the cgroups freezer to un-suspend all\nprocesses in a container.\nSee the\ncgroups freezer documentation\nfor further details.\nversion\nUsage: docker version\n\nShow the Docker version information.\n\nShow the Docker version, API version, Git commit, and Go version of\nboth Docker client and daemon.\nwait\nUsage: docker wait CONTAINER [CONTAINER...]\n\nBlock until a container stops, then print its exit code.",
"title": "Command line"
},
{
"loc": "/reference/commandline/cli#command-line",
"tags": "",
"text": "Note: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples. To list available commands, either run docker with no parameters\nor execute docker help : $ sudo docker\n Usage: docker [OPTIONS] COMMAND [arg...]\n -H, --host=[]: The socket(s) to bind to in daemon mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.\n\n A self-sufficient runtime for Linux containers.\n\n ...",
"title": "Command Line"
},
{
"loc": "/reference/commandline/cli#help",
"tags": "",
"text": "To list the help on any command just execute the command, followed by the --help option. $ sudo docker run --help\n\nUsage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nRun a command in a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n -c, --cpu-shares=0 CPU shares (relative weight)\n...",
"title": "Help"
},
{
"loc": "/reference/commandline/cli#option-types",
"tags": "",
"text": "Single character command line options can be combined, so rather than\ntyping docker run -i -t --name test busybox sh ,\nyou can write docker run -it --name test busybox sh . Boolean Boolean options take the form -d=false . The value you see in the help text is the\ndefault value which is set if you do not specify that flag. If you specify\na Boolean flag without a value, this will set the flag to true , irrespective\nof the default value. For example, running docker run -d will set the value to true , so\nyour container will run in \"detached\" mode, in the background. Options which default to true (e.g., docker build --rm=true ) can only\nbe set to the non-default value by explicitly setting them to false : $ docker build --rm=false . Multi Options like -a=[] indicate they can be specified multiple times: $ sudo docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash Sometimes this can use a more complex value string, as for -v : $ sudo docker run -v /host:/container example/mysql Strings and Integers Options like --name=\"\" expect a string, and they\ncan only be specified once. Options like -c=0 \nexpect an integer, and they can only be specified once.",
"title": "Option types"
},
{
"loc": "/reference/commandline/cli#daemon",
"tags": "",
"text": "Usage: docker [OPTIONS] COMMAND [arg...]\n\nA self-sufficient runtime for linux containers.\n\nOptions:\n --api-enable-cors=false Enable CORS headers in the remote API\n -b, --bridge=\"\" Attach containers to a pre-existing network bridge\n use 'none' to disable container networking\n --bip=\"\" Use this CIDR notation address for the network bridge's IP, not compatible with -b\n -D, --debug=false Enable debug mode\n -d, --daemon=false Enable daemon mode\n --dns=[] Force Docker to use specific DNS servers\n --dns-search=[] Force Docker to use specific DNS search domains\n -e, --exec-driver=\"native\" Force the Docker runtime to use a specific exec driver\n --fixed-cidr=\"\" IPv4 subnet for fixed IPs (e.g.: 10.20.0.0/16)\n this subnet must be nested in the bridge subnet (which is defined by -b or --bip)\n --fixed-cidr-v6=\"\" IPv6 subnet for global IPs (e.g.: 2a00:1450::/64)\n -G, --group=\"docker\" Group to assign the unix socket specified by -H when running in daemon mode\n use '' (the empty string) to disable setting of a group\n -g, --graph=\"/var/lib/docker\" Path to use as the root of the Docker runtime\n -H, --host=[] The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.\n --icc=true Allow unrestricted inter-container and Docker daemon host communication\n --insecure-registry=[] Enable insecure communication with specified registries (disables certificate verification for HTTPS and enables HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16)\n --ip=0.0.0.0 Default IP address to use when binding container ports\n --ip-forward=true Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. 
IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement.\n --ip-masq=true Enable IP masquerading for bridge's IP range\n --iptables=true Enable Docker's addition of iptables rules\n --ipv6=false Enable Docker IPv6 support\n -l, --log-level=\"info\" Set the logging level (debug, info, warn, error, fatal)\n --label=[] Set key=value labels to the daemon (displayed in `docker info`)\n --mtu=0 Set the containers network MTU\n if no value is provided: default to the default route MTU or 1500 if no default route is available\n -p, --pidfile=\"/var/run/docker.pid\" Path to use for daemon PID file\n --registry-mirror=[] Specify a preferred Docker registry mirror\n -s, --storage-driver=\"\" Force the Docker runtime to use a specific storage driver\n --selinux-enabled=false Enable selinux support. SELinux does not presently support the BTRFS storage driver\n --storage-opt=[] Set storage driver options\n --tls=false Use TLS; implied by --tlsverify flag\n --tlscacert=\"/home/sven/.docker/ca.pem\" Trust only remotes providing a certificate signed by the CA given here\n --tlscert=\"/home/sven/.docker/cert.pem\" Path to TLS certificate file\n --tlskey=\"/home/sven/.docker/key.pem\" Path to TLS key file\n --tlsverify=false Use TLS and verify the remote (daemon: verify client, client: verify daemon)\n -v, --version=false Print version information and quit Options with [] may be specified multiple times. The Docker daemon is the persistent process that manages containers.\nDocker uses the same binary for both the daemon and client. To run the\ndaemon you provide the -d flag. To run the daemon with debug output, use docker -d -D . Daemon socket option The Docker daemon can listen for Docker Remote API \nrequests via three different types of Socket: unix , tcp , and fd . By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock ,\nrequiring either root permission, or docker group membership. 
If you need to access the Docker daemon remotely, you need to enable the tcp \nSocket. Beware that the default setup provides un-encrypted and un-authenticated\ndirect access to the Docker daemon - and should be secured either using the built in HTTPS encrypted socket , or by putting a secure web\nproxy in front of it. You can listen on port 2375 on all network interfaces\nwith -H tcp://0.0.0.0:2375 , or on a particular network interface using its IP\naddress: -H tcp://192.168.59.103:2375 . It is conventional to use port 2375 \nfor un-encrypted, and port 2376 for encrypted communication with the daemon. Note If you're using an HTTPS encrypted socket, keep in mind that only TLS1.0\nand greater are supported. Protocols SSLv3 and under are not supported anymore\nfor security reasons. On Systemd based systems, you can communicate with the daemon via Systemd socket activation , use docker -d -H fd:// . Using fd:// will work perfectly for most setups but\nyou can also specify individual sockets: docker -d -H fd://3 . If the\nspecified socket activated files aren't found, then Docker will exit. You\ncan find examples of using Systemd socket activation with Docker and\nSystemd in the Docker source tree . You can configure the Docker daemon to listen to multiple sockets at the same\ntime using multiple -H options: # listen using the default unix socket, and on 2 specific IP addresses on this host.\ndocker -d -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2 The Docker client will honor the DOCKER_HOST environment variable to set\nthe -H flag for the client. $ sudo docker -H tcp://0.0.0.0:2375 ps\n# or\n$ export DOCKER_HOST=\"tcp://0.0.0.0:2375\"\n$ sudo docker ps\n# both are equal Setting the DOCKER_TLS_VERIFY environment variable to any value other than the empty\nstring is equivalent to setting the --tlsverify flag. 
The following are equivalent: $ sudo docker --tlsverify ps\n# or\n$ export DOCKER_TLS_VERIFY=1\n$ sudo docker ps The Docker client will honor the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY \nenvironment variables (or the lowercase versions thereof). HTTPS_PROXY takes\nprecedence over HTTP_PROXY . If you happen to have a proxy configured with the HTTP_PROXY or HTTPS_PROXY environment variables but still want to\ncommunicate with the Docker daemon over its default unix domain socket,\nsetting the NO_PROXY environment variable to the path of the socket\n( /var/run/docker.sock ) is required. Daemon storage-driver option The Docker daemon has support for several different image layer storage drivers: aufs , devicemapper , btrfs and overlay . The aufs driver is the oldest, but is based on a Linux kernel patch-set that\nis unlikely to be merged into the main kernel. These are also known to cause some\nserious kernel crashes. However, aufs is also the only storage driver that allows\ncontainers to share executable and shared library memory, so is a useful choice\nwhen running thousands of containers with the same program or libraries. The devicemapper driver uses thin provisioning and Copy on Write (CoW)\nsnapshots. For each devicemapper graph location \u2013 typically /var/lib/docker/devicemapper \u2013 a thin pool is created based on two block\ndevices, one for data and one for metadata. By default, these block devices\nare created automatically by using loopback mounts of automatically created\nsparse files. Refer to Storage driver options below\nfor a way how to customize this setup. ~jpetazzo/Resizing Docker containers with the Device Mapper plugin article\nexplains how to tune your existing setup without the use of options. The btrfs driver is very fast for docker build - but like devicemapper does not\nshare executable memory between devices. Use docker -d -s btrfs -g /mnt/btrfs_partition . The overlay is a very fast union filesystem. 
It is now merged in the main\nLinux kernel as of 3.18.0 .\nCall docker -d -s overlay to use it. Note: \nIt is currently unsupported on btrfs or any Copy on Write filesystem\nand should only be used over ext4 partitions. Storage driver options Particular storage-driver can be configured with options specified with --storage-opt flags. The only driver accepting options is devicemapper as\nof now. All its options are prefixed with dm . Currently supported options are: dm.basesize Specifies the size to use when creating the base device, which limits the\nsize of images and containers. The default value is 10G. Note, thin devices\nare inherently \"sparse\", so a 10G device which is mostly empty doesn't use\n10 GB of space on the pool. However, the filesystem will use more space for\nthe empty case the larger the device is. Warning : This value affects the system-wide \"base\" empty filesystem\n that may already be initialized and inherited by pulled images. Typically,\n a change to this value will require additional steps to take effect: $ sudo service docker stop\n$ sudo rm -rf /var/lib/docker\n$ sudo service docker start Example use: $ sudo docker -d --storage-opt dm.basesize=20G dm.loopdatasize Specifies the size to use when creating the loopback file for the \"data\"\ndevice which is used for the thin pool. The default size is 100G. Note that\nthe file is sparse, so it will not initially take up this much space. Example use: $ sudo docker -d --storage-opt dm.loopdatasize=200G dm.loopmetadatasize Specifies the size to use when creating the loopback file for the\n\"metadata\" device which is used for the thin pool. The default size is 2G.\nNote that the file is sparse, so it will not initially take up this much\nspace. Example use: $ sudo docker -d --storage-opt dm.loopmetadatasize=4G dm.fs Specifies the filesystem type to use for the base device. The supported\noptions are \"ext4\" and \"xfs\". 
The default is \"ext4\" Example use: $ sudo docker -d --storage-opt dm.fs=xfs dm.mkfsarg Specifies extra mkfs arguments to be used when creating the base device. Example use: $ sudo docker -d --storage-opt \"dm.mkfsarg=-O ˆhas_journal\" dm.mountopt Specifies extra mount options used when mounting the thin devices. Example use: $ sudo docker -d --storage-opt dm.mountopt=nodiscard dm.datadev Specifies a custom blockdevice to use for data for the thin pool. If using a block device for device mapper storage, ideally both datadev and\nmetadatadev should be specified to completely avoid using the loopback\ndevice. Example use: $ sudo docker -d \\\n --storage-opt dm.datadev=/dev/sdb1 \\\n --storage-opt dm.metadatadev=/dev/sdc1 dm.metadatadev Specifies a custom blockdevice to use for metadata for the thin pool. For best performance the metadata should be on a different spindle than the\ndata, or even better on an SSD. If setting up a new metadata pool it is required to be valid. This can be\nachieved by zeroing the first 4k to indicate empty metadata, like this: $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 Example use: $ sudo docker -d \\\n --storage-opt dm.datadev=/dev/sdb1 \\\n --storage-opt dm.metadatadev=/dev/sdc1 dm.blocksize Specifies a custom blocksize to use for the thin pool. The default\nblocksize is 64K. Example use: $ sudo docker -d --storage-opt dm.blocksize=512K dm.blkdiscard Enables or disables the use of blkdiscard when removing devicemapper\ndevices. This is enabled by default (only) if using loopback devices and is\nrequired to resparsify the loopback file on image/container removal. Disabling this on loopback can lead to much faster container removal\ntimes, but will make the space used in /var/lib/docker directory not be\nreturned to the system for other use when containers are removed. 
Example use: $ sudo docker -d --storage-opt dm.blkdiscard=false Docker exec-driver option The Docker daemon uses a specifically built libcontainer execution driver as its\ninterface to the Linux kernel namespaces , cgroups , and SELinux . There is still legacy support for the original LXC userspace tools via the lxc execution driver, however, this is\nnot where the primary development of new functionality is taking place.\nAdd -e lxc to the daemon flags to use the lxc execution driver. Daemon DNS options To set the DNS server for all Docker containers, use docker -d --dns 8.8.8.8 . To set the DNS search domain for all Docker containers, use docker -d --dns-search example.com . Insecure registries Docker considers a private registry either secure or insecure.\nIn the rest of this section, registry is used for private registry , and myregistry:5000 \nis a placeholder example for a private registry. A secure registry uses TLS and a copy of its CA certificate is placed on the Docker host at /etc/docker/certs.d/myregistry:5000/ca.crt .\nAn insecure registry is either not using TLS (i.e., listening on plain text HTTP), or is using\nTLS with a CA certificate not known by the Docker daemon. The latter can happen when the\ncertificate was not found under /etc/docker/certs.d/myregistry:5000/ , or if the certificate\nverification failed (i.e., wrong CA). By default, Docker assumes all, but local (see local registries below), registries are secure.\nCommunicating with an insecure registry is not possible if Docker assumes that registry is secure.\nIn order to communicate with an insecure registry, the Docker daemon requires --insecure-registry \nin one of the following two forms: --insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should be considered insecure. 
--insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain resolve to an IP address is part\nof the subnet described by the CIDR syntax, should be considered insecure. The flag can be used multiple times to allow multiple registries to be marked as insecure. If an insecure registry is not marked as insecure, docker pull , docker push , and docker search \nwill result in an error message prompting the user to either secure or pass the --insecure-registry \nflag to the Docker daemon as described above. Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure\nas of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future. Running a Docker daemon behind a HTTPS_PROXY When running inside a LAN that uses a HTTPS proxy, the Docker Hub certificates\nwill be replaced by the proxy's certificates. These certificates need to be added\nto your Docker host's configuration: Install the ca-certificates package for your distribution Ask your network admin for the proxy's CA certificate and append them to\n /etc/pki/tls/certs/ca-bundle.crt Then start your Docker daemon with HTTPS_PROXY=http://username:password@proxy:port/ docker -d .\n The username: and password@ are optional - and are only needed if your proxy\n is set up to require authentication. This will only add the proxy and authentication to the Docker daemon's requests -\nyour docker build s and running containers will need extra configuration to use\nthe proxy Miscellaneous options IP masquerading uses address translation to allow containers without a public IP to talk\nto other machines on the Internet. This may interfere with some network topologies and\ncan be disabled with --ip-masq=false. Docker supports softlinks for the Docker data directory\n( /var/lib/docker ) and for /var/lib/docker/tmp . 
The DOCKER_TMPDIR and the data directory can be set like this: DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// /var/lib/boot2docker/docker.log 2 1\n# or\nexport DOCKER_TMPDIR=/mnt/disk2/tmp\n/usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// /var/lib/boot2docker/docker.log 2 1",
"title": "daemon"
},
{
"loc": "/reference/commandline/cli#attach",
"tags": "",
"text": "Usage: docker attach [OPTIONS] CONTAINER\n\nAttach to a running container\n\n --no-stdin=false Do not attach STDIN\n --sig-proxy=true Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied. The docker attach command allows you to attach to a running container using\nthe container's ID or name, either to view its ongoing output or to control it\ninteractively. You can attach to the same contained process multiple times\nsimultaneously, screen sharing style, or quickly view the progress of your\ndaemonized process. You can detach from the container (and leave it running) with CTRL-p CTRL-q \n(for a quiet exit) or CTRL-c which will send a SIGKILL to the container.\nWhen you are attached to a container, and exit its main process, the process's\nexit code will be returned to the client. It is forbidden to redirect the standard input of a docker attach command while\nattaching to a tty-enabled container (i.e.: launched with -t ). Examples $ sudo docker run -d --name topdemo ubuntu /usr/bin/top -b)\n$ sudo docker attach topdemo\ntop - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05\nTasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 373572k total, 355560k used, 18012k free, 27872k buffers\nSwap: 786428k total, 0k used, 786428k free, 221740k cached\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top\n\n top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05\n Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n Mem: 373572k total, 355244k used, 18328k free, 27872k buffers\n Swap: 786428k total, 0k used, 786428k free, 221776k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top\n\n\n top - 02:05:58 up 3:06, 0 
users, load average: 0.01, 0.02, 0.05\n Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n Mem: 373572k total, 355780k used, 17792k free, 27880k buffers\n Swap: 786428k total, 0k used, 786428k free, 221776k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top\nˆC$\n$ echo $?\n0\n$ docker ps -a | grep topdemo\n7998ac8581f9 ubuntu:14.04 \"/usr/bin/top -b\" 38 seconds ago Exited (0) 21 seconds ago topdemo And in this second example, you can see the exit code returned by the bash process\nis returned by the docker attach command to its caller too: $ sudo docker run --name test -d -it debian\n275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab\n$ sudo docker attach test\n$$ exit 13\nexit\n$ echo $?\n13\n$ sudo docker ps -a | grep test\n275c44472aeb debian:7 \"/bin/bash\" 26 seconds ago Exited (13) 17 seconds ago test",
"title": "attach"
},
{
"loc": "/reference/commandline/cli#build",
"tags": "",
"text": "Usage: docker build [OPTIONS] PATH | URL | -\n\nBuild a new image from the source code at PATH\n\n --force-rm=false Always remove intermediate containers, even after unsuccessful builds\n --no-cache=false Do not use cache when building the image\n --pull=false Always attempt to pull a newer version of the image\n -q, --quiet=false Suppress the verbose output generated by the containers\n --rm=true Remove intermediate containers after a successful build\n -t, --tag=\"\" Repository name (and optionally a tag) to be applied to the resulting image in case of success Use this command to build Docker images from a Dockerfile and a\n\"context\". The files at PATH or URL are called the \"context\" of the build. The\nbuild process may refer to any of the files in the context, for example\nwhen using an ADD instruction.\nWhen a single Dockerfile is given as URL or is piped through STDIN \n( docker build - Dockerfile ), then no context is set. When a Git repository is set as URL , then the repository is used as\nthe context. The Git repository is cloned with its submodules\n( git clone -recursive ). A fresh git clone occurs in a temporary directory\non your local host, and then this is sent to the Docker daemon as the\ncontext. This way, your local user credentials and VPN's etc can be\nused to access private repositories. If a file named .dockerignore exists in the root of PATH then it\nis interpreted as a newline-separated list of exclusion patterns.\nExclusion patterns match files or directories relative to PATH that\nwill be excluded from the context. Globbing is done using Go's filepath.Match rules. Please note that .dockerignore files in other subdirectories are\nconsidered as normal files. Filepaths in .dockerignore are absolute with\nthe current directory as the root. Wildcards are allowed but the search\nis not recursive. Example .dockerignore file */temp*\n*/*/temp*\ntemp? 
The first line above */temp* , would ignore all files with names starting with temp from any subdirectory below the root directory. For example, a file named /somedir/temporary.txt would be ignored. The second line */*/temp* , will\nignore files starting with name temp from any subdirectory that is two levels\nbelow the root directory. For example, the file /somedir/subdir/temporary.txt \nwould get ignored in this case. The last line in the above example temp? \nwill ignore the files that match the pattern from the root directory.\nFor example, the files tempa , tempb are ignored from the root directory.\nCurrently there is no support for regular expressions. Formats\nlike [ˆtemp*] are ignored. By default the docker build command will look for a Dockerfile at the\nroot of the build context. The -f , --file , option lets you specify\nthe path to an alternative file to use instead. This is useful\nin cases where the same set of files are used for multiple builds. The path\nmust be to a file within the build context. If a relative path is specified\nthen it must to be relative to the current directory. See also: Dockerfile Reference . 
Examples $ sudo docker build .\nUploading context 10240 bytes\nStep 1 : FROM busybox\nPulling repository busybox\n --- e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/\nStep 2 : RUN ls -lh /\n --- Running in 9c9e81692ae9\ntotal 24\ndrwxr-xr-x 2 root root 4.0K Mar 12 2013 bin\ndrwxr-xr-x 5 root root 4.0K Oct 19 00:19 dev\ndrwxr-xr-x 2 root root 4.0K Oct 19 00:19 etc\ndrwxr-xr-x 2 root root 4.0K Nov 15 23:34 lib\nlrwxrwxrwx 1 root root 3 Mar 12 2013 lib64 - lib\ndr-xr-xr-x 116 root root 0 Nov 15 23:34 proc\nlrwxrwxrwx 1 root root 3 Mar 12 2013 sbin - bin\ndr-xr-xr-x 13 root root 0 Nov 15 23:34 sys\ndrwxr-xr-x 2 root root 4.0K Mar 12 2013 tmp\ndrwxr-xr-x 2 root root 4.0K Nov 15 23:34 usr\n --- b35f4035db3f\nStep 3 : CMD echo Hello world\n --- Running in 02071fceb21b\n --- f52f38b7823e\nSuccessfully built f52f38b7823e\nRemoving intermediate container 9c9e81692ae9\nRemoving intermediate container 02071fceb21b This example specifies that the PATH is . , and so all the files in the local directory get tar d and sent to the Docker daemon. The PATH \nspecifies where to find the files for the \"context\" of the build on the\nDocker daemon. Remember that the daemon could be running on a remote\nmachine and that no parsing of the Dockerfile\nhappens at the client side (where you're running docker build ). That means that all the files at PATH get sent, not just the ones listed to ADD in the Dockerfile. The transfer of context from the local machine to the Docker daemon is\nwhat the docker client means when you see the\n\"Sending build context\" message. If you wish to keep the intermediate containers after the build is\ncomplete, you must use --rm=false . This does not\naffect the build cache. 
$ sudo docker build .\nUploading context 18.829 MB\nUploading context\nStep 0 : FROM busybox\n --- 769b9341d937\nStep 1 : CMD echo Hello world\n --- Using cache\n --- 99cc1ad10469\nSuccessfully built 99cc1ad10469\n$ echo \".git\" .dockerignore\n$ sudo docker build .\nUploading context 6.76 MB\nUploading context\nStep 0 : FROM busybox\n --- 769b9341d937\nStep 1 : CMD echo Hello world\n --- Using cache\n --- 99cc1ad10469\nSuccessfully built 99cc1ad10469 This example shows the use of the .dockerignore file to exclude the .git \ndirectory from the context. Its effect can be seen in the changed size of the\nuploaded context. $ sudo docker build -t vieux/apache:2.0 . This will build like the previous example, but it will then tag the\nresulting image. The repository name will be vieux/apache \nand the tag will be 2.0 $ sudo docker build - Dockerfile This will read a Dockerfile from STDIN without context. Due to the\nlack of a context, no contents of any local directory will be sent to\nthe Docker daemon. Since there is no context, a Dockerfile ADD only\nworks if it refers to a remote URL. $ sudo docker build - context.tar.gz This will build an image for a compressed context read from STDIN .\nSupported formats are: bzip2, gzip and xz. $ sudo docker build github.com/creack/docker-firefox This will clone the GitHub repository and use the cloned repository as\ncontext. The Dockerfile at the root of the\nrepository is used as Dockerfile. Note that you\ncan specify an arbitrary Git repository by using the git:// or git@ \nschema. $ sudo docker build -f Dockerfile.debug . This will use a file called Dockerfile.debug for the build\ninstructions instead of Dockerfile . $ sudo docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .\n$ sudo docker build -f dockerfiles/Dockerfile.prod -t myapp_prod . The above commands will build the current build context (as specified by \nthe . ) twice, once using a debug version of a Dockerfile and once using \na production version. 
$ cd /home/me/myapp/some/dir/really/deep\n$ sudo docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp\n$ sudo docker build -f ../../../../dockerfiles/debug /home/me/myapp These two docker build commands do the exact same thing. They both\nuse the contents of the debug file instead of looking for a Dockerfile \nand will use /home/me/myapp as the root of the build context. Note that debug is in the directory structure of the build context, regardless of how \nyou refer to it on the command line. Note: docker build will return a no such file or directory error\nif the file or directory does not exist in the uploaded context. This may\nhappen if there is no context, or if you specify a file that is elsewhere\non the Host system. The context is limited to the current directory (and its\nchildren) for security reasons, and to ensure repeatable builds on remote\nDocker hosts. This is also the reason why ADD ../file will not work.",
"title": "build"
},
{
"loc": "/reference/commandline/cli#commit",
"tags": "",
"text": "Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]\n\nCreate a new image from a container's changes\n\n -a, --author=\"\" Author (e.g., \"John Hannibal Smith hannibal@a-team.com \")\n -m, --message=\"\" Commit message\n -p, --pause=true Pause container during commit It can be useful to commit a container's file changes or settings into a\nnew image. This allows you debug a container by running an interactive\nshell, or to export a working dataset to another server. Generally, it\nis better to use Dockerfiles to manage your images in a documented and\nmaintainable way. By default, the container being committed and its processes will be paused\nwhile the image is committed. This reduces the likelihood of\nencountering data corruption during the process of creating the commit.\nIf this behavior is undesired, set the 'p' option to false. Commit an existing container $ sudo docker ps\nID IMAGE COMMAND CREATED STATUS PORTS\nc3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours\n197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours\n$ sudo docker commit c3f279d17e0a SvenDowideit/testimage:version3\nf5283438590d\n$ sudo docker images | head\nREPOSITORY TAG ID CREATED VIRTUAL SIZE\nSvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB",
"title": "commit"
},
{
"loc": "/reference/commandline/cli#cp",
"tags": "",
"text": "Copy files/folders from a container's filesystem to the host\npath. Paths are relative to the root of the filesystem. Usage: docker cp CONTAINER:PATH HOSTPATH\n\nCopy files/folders from the PATH to the HOSTPATH",
"title": "cp"
},
{
"loc": "/reference/commandline/cli#create",
"tags": "",
"text": "Creates a new container. Usage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nCreate a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n --add-host=[] Add a custom host-to-IP mapping (host:ip)\n -c, --cpu-shares=0 CPU shares (relative weight)\n --cap-add=[] Add Linux capabilities\n --cap-drop=[] Drop Linux capabilities\n --cidfile=\"\" Write the container ID to the file\n --cpuset=\"\" CPUs in which to allow execution (0-3, 0,1)\n --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)\n --dns=[] Set custom DNS servers\n --dns-search=[] Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)\n -e, --env=[] Set environment variables\n --entrypoint=\"\" Overwrite the default ENTRYPOINT of the image\n --env-file=[] Read in a line delimited file of environment variables\n --expose=[] Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host\n -h, --hostname=\"\" Container host name\n -i, --interactive=false Keep STDIN open even if not attached\n --ipc=\"\" Default is to create a private IPC namespace (POSIX SysV IPC) for the container\n 'container: name|id ': reuses another container shared memory, semaphores and message queues\n 'host': use the host shared memory,semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.\n --link=[] Add link to another container in the form of name or id :alias\n --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"\n -m, --memory=\"\" Memory limit (format: number optional unit , where unit = b, k, m or g)\n --mac-address=\"\" Container MAC address (e.g. 
92:d0:c6:0a:29:33)\n --name=\"\" Assign a name to the container\n --net=\"bridge\" Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container: name|id ': reuses another container network stack\n 'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces\n -p, --publish=[] Publish a container's port, or a range of ports (e.g., `-p 3300-3310`), to the host\n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports. \n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n --privileged=false Give extended privileges to this container\n --read-only=false Mount the container's root filesystem as read only\n --restart=\"\" Restart policy to apply when a container exits (no, on-failure[:max-retry], always)\n --security-opt=[] Security Options\n -t, --tty=false Allocate a pseudo-TTY\n -u, --user=\"\" Username or UID\n -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)\n --volumes-from=[] Mount volumes from the specified container(s)\n -w, --workdir=\"\" Working directory inside the container The docker create command creates a writeable container layer over\nthe specified image and prepares it for running the specified command.\nThe container ID is then printed to STDOUT .\nThis is similar to docker run -d except the container is never started.\nYou can then use the docker start container_id command to start the\ncontainer at any 
point. This is useful when you want to set up a container configuration ahead\nof time so that it is ready to start when you need it. Please see the run command section for more details. Examples $ sudo docker create -t -i fedora bash\n6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752\n$ sudo docker start -a -i 6d8af538ec5\nbash-4.2# As of v1.4.0 container volumes are initialized during the docker create \nphase (i.e., docker run too). For example, this allows you to create the data volume container, and then use it from another container: $ docker create -v /data --name data ubuntu\n240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57\n$ docker run --rm --volumes-from data ubuntu ls -la /data\ntotal 8\ndrwxr-xr-x 2 root root 4096 Dec 5 04:10 .\ndrwxr-xr-x 48 root root 4096 Dec 5 04:11 .. Similarly, create a host directory bind mounted volume container, which\ncan then be used from the subsequent container: $ docker create -v /home/docker:/docker --name docker ubuntu\n9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03\n$ docker run --rm --volumes-from docker ubuntu ls -la /docker\ntotal 20\ndrwxr-sr-x 5 1000 staff 180 Dec 5 04:00 .\ndrwxr-xr-x 48 root root 4096 Dec 5 04:13 ..\n-rw-rw-r-- 1 1000 staff 3833 Dec 5 04:01 .ash_history\n-rw-r--r-- 1 1000 staff 446 Nov 28 11:51 .ashrc\n-rw-r--r-- 1 1000 staff 25 Dec 5 04:00 .gitconfig\ndrwxr-sr-x 3 1000 staff 60 Dec 1 03:28 .local\n-rw-r--r-- 1 1000 staff 920 Nov 28 11:51 .profile\ndrwx--S--- 2 1000 staff 460 Dec 5 00:51 .ssh\ndrwxr-xr-x 32 1000 staff 1140 Dec 5 04:01 docker",
"title": "create"
},
{
"loc": "/reference/commandline/cli#diff",
"tags": "",
"text": "List the changed files and directories in a container\u1fbfs filesystem Usage: docker diff CONTAINER\n\nInspect changes on a container's filesystem There are 3 events that are listed in the diff : A - Add D - Delete C - Change For example: $ sudo docker diff 7bb0e258aefe\n\nC /dev\nA /dev/kmsg\nC /etc\nA /etc/mtab\nA /go\nA /go/src\nA /go/src/github.com\nA /go/src/github.com/docker\nA /go/src/github.com/docker/docker\nA /go/src/github.com/docker/docker/.git\n....",
"title": "diff"
},
{
"loc": "/reference/commandline/cli#events",
"tags": "",
"text": "Usage: docker events [OPTIONS]\n\nGet real time events from the server\n\n -f, --filter=[] Provide filter values (i.e., 'event=stop')\n --since=\"\" Show all events created since timestamp\n --until=\"\" Stream events until this timestamp Docker containers will report the following events: create, destroy, die, export, kill, oom, pause, restart, start, stop, unpause and Docker images will report: untag, delete Filtering The filtering flag ( -f or --filter ) format is of \"key=value\". If you would like to use\nmultiple filters, pass multiple flags (e.g., --filter \"foo=bar\" --filter \"bif=baz\" ) Using the same filter multiple times will be handled as a OR ; for example --filter container=588a23dac085 --filter container=a8f7720b8c22 will display events for\ncontainer 588a23dac085 OR container a8f7720b8c22 Using multiple filters will be handled as a AND ; for example --filter container=588a23dac085 --filter event=start will display events for container\ncontainer 588a23dac085 AND the event type is start Current filters:\n * event\n * image\n * container Examples You'll need two shells for this example. Shell 1: Listening for events: $ sudo docker events Shell 2: Start and Stop containers: $ sudo docker start 4386fb97867d\n$ sudo docker stop 4386fb97867d\n$ sudo docker stop 7805c1d35632 Shell 1: (Again .. 
now showing events): 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop Show events in the past from a specified time: $ sudo docker events --since 1378216169\n2014-03-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-03-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --since '2013-09-03'\n2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-09-03T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --since '2013-09-03T15:49:29'\n2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop Filter events: $ sudo docker events --filter 'event=stop'\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'image=ubuntu-1:14.04'\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) start\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) 
die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n\n$ sudo docker events --filter 'container=7805c1d35632'\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d'\n2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die\n2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop\n2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop\n\n$ sudo docker events --filter 'container=7805c1d35632' --filter 'event=stop'\n2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop",
"title": "events"
},
{
"loc": "/reference/commandline/cli#exec",
"tags": "",
"text": "Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]\n\nRun a command in a running container\n\n -d, --detach=false Detached mode: run command in the background\n -i, --interactive=false Keep STDIN open even if not attached\n -t, --tty=false Allocate a pseudo-TTY The docker exec command runs a new command in a running container. The command started using docker exec will only run while the container's primary\nprocess ( PID 1 ) is running, and will not be restarted if the container is restarted. If the container is paused, then the docker exec command will fail with an error: $ docker pause test\ntest\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n1ae3b36715d2 ubuntu:latest \"bash\" 17 seconds ago Up 16 seconds (Paused) test\n$ docker exec test ls\nFATA[0000] Error response from daemon: Container test is paused, unpause the container before exec\n$ echo $?\n1 Examples $ sudo docker run --name ubuntu_bash --rm -i -t ubuntu bash This will create a container named ubuntu_bash and start a Bash session. $ sudo docker exec -d ubuntu_bash touch /tmp/execWorks This will create a new file /tmp/execWorks inside the running container ubuntu_bash , in the background. $ sudo docker exec -it ubuntu_bash bash This will create a new Bash session in the container ubuntu_bash .",
"title": "exec"
},
{
"loc": "/reference/commandline/cli#export",
"tags": "",
"text": "Usage: docker export CONTAINER\n\nExport the contents of a filesystem as a tar archive to STDOUT For example: $ sudo docker export red_panda latest.tar Note: docker export does not export the contents of volumes associated with the\ncontainer. If a volume is mounted on top of an existing directory in the \ncontainer, docker export will export the contents of the underlying \ndirectory, not the contents of the volume. Refer to Backup, restore, or migrate data volumes \nin the user guide for examples on exporting data in a volume.",
"title": "export"
},
{
"loc": "/reference/commandline/cli#history",
"tags": "",
"text": "Usage: docker history [OPTIONS] IMAGE\n\nShow the history of an image\n\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only show numeric IDs To see how the docker:latest image was built: $ sudo docker history docker\nIMAGE CREATED CREATED BY SIZE\n3e23a5875458790b7a806f95f7ec0d0b2a5c1659bfc899c89f939f6d5b8f7094 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B\n8578938dd17054dce7993d21de79e96a037400e8d28e15e7290fea4f65128a36 8 days ago /bin/sh -c dpkg-reconfigure locales locale-gen C.UTF-8 /usr/sbin/update-locale LANG=C.UTF-8 1.245 MB\nbe51b77efb42f67a5e96437b3e102f81e0a1399038f77bf28cea0ed23a65cf60 8 days ago /bin/sh -c apt-get update apt-get install -y git libxml2-dev python build-essential make gcc python-dev locales python-pip 338.3 MB\n4b137612be55ca69776c7f30c2d2dd0aa2e7d72059820abf3e25b629f887a084 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB\n750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi admwiggin@gmail.com - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian 0 B\n511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 9 months ago 0 B",
"title": "history"
},
{
"loc": "/reference/commandline/cli#images",
"tags": "",
"text": "Usage: docker images [OPTIONS] [REPOSITORY]\n\nList images\n\n -a, --all=false Show all images (by default filter out the intermediate image layers)\n -f, --filter=[] Provide filter values (i.e., 'dangling=true')\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only show numeric IDs The default docker images will show all top level\nimages, their repository and tags, and their virtual size. Docker images have intermediate layers that increase reusability,\ndecrease disk usage, and speed up docker build by\nallowing each step to be cached. These intermediate layers are not shown\nby default. The VIRTUAL SIZE is the cumulative space taken up by the image and all\nits parent images. This is also the disk space used by the contents of the\nTar file created when you docker save an image. An image will be listed more than once if it has multiple repository names\nor tags. This single image (identifiable by its matching IMAGE ID )\nuses up the VIRTUAL SIZE listed only once. 
Listing the most recently created images $ sudo docker images | head\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\n<none> <none> 77af4d6b9913 19 hours ago 1.089 GB\ncommitt latest b6fa739cedf5 19 hours ago 1.089 GB\n<none> <none> 78a85c484f71 19 hours ago 1.089 GB\ndocker latest 30557a29d5ab 20 hours ago 1.089 GB\n<none> <none> 5ed6274db6ce 24 hours ago 1.089 GB\npostgres 9 746b819f315e 4 days ago 213.4 MB\npostgres 9.3 746b819f315e 4 days ago 213.4 MB\npostgres 9.3.5 746b819f315e 4 days ago 213.4 MB\npostgres latest 746b819f315e 4 days ago 213.4 MB Listing the full length image IDs $ sudo docker images --no-trunc | head\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\n<none> <none> 77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182 19 hours ago 1.089 GB\ncommittest latest b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f 19 hours ago 1.089 GB\n<none> <none> 78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921 19 hours ago 1.089 GB\ndocker latest 30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4 20 hours ago 1.089 GB\n<none> <none> 0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5 20 hours ago 1.089 GB\n<none> <none> 18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b 22 hours ago 1.082 GB\n<none> <none> f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a 23 hours ago 1.089 GB\ntryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 131.5 MB\n<none> <none> 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB Filtering The filtering flag ( -f or --filter ) format is of \"key=value\". If there is more\nthan one filter, then pass multiple flags (e.g., --filter \"foo=bar\" --filter \"bif=baz\" ) Current filters:\n * dangling (boolean - true or false) Untagged images $ sudo docker images --filter \"dangling=true\"\n\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\n<none> <none> 8abc22fbb042 4 weeks ago 0 B\n<none> <none> 48e5f45168b9 4 weeks ago 2.489 MB\n<none> <none> bf747efa0e2f 4 weeks ago 0 B\n<none> <none> 980fe10e5736 12 weeks ago 101.4 MB\n<none> <none> dea752e4e117 12 weeks ago 101.4 MB\n<none> <none> 511136ea3c5a 8 months ago 0 B This will display untagged images that are the leaves of the images tree (not\nintermediary layers). These images occur when a new build of an image takes the repo:tag away from the image ID, leaving it untagged. A warning will be issued\nif trying to remove an image when a container is presently using it.\nThis flag allows for batch cleanup. Ready for use by docker rmi ... , like: $ sudo docker rmi $(sudo docker images -f \"dangling=true\" -q)\n\n8abc22fbb042\n48e5f45168b9\nbf747efa0e2f\n980fe10e5736\ndea752e4e117\n511136ea3c5a NOTE: Docker will warn you if any containers exist that are using these untagged images.",
"title": "images"
},
{
"loc": "/reference/commandline/cli#import",
"tags": "",
"text": "Usage: docker import URL|- [REPOSITORY[:TAG]]\n\nCreate an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it. URLs must start with http and point to a single file archive (.tar,\n.tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a root filesystem. If\nyou would like to import from a local directory or archive, you can use\nthe - parameter to take the data from STDIN . Examples Import from a remote location: This will create a new untagged image. $ sudo docker import http://example.com/exampleimage.tgz Import from a local file: Import to docker via pipe and STDIN . $ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new Import from a local directory: $ sudo tar -c . | sudo docker import - exampleimagedir Note the sudo in this example \u2013 you must preserve\nthe ownership of the files (especially root ownership) during the\narchiving with tar. If you are not root (or the sudo command) when you\ntar, then the ownerships might not get preserved.",
"title": "import"
},
{
"loc": "/reference/commandline/cli#info",
"tags": "",
"text": "Usage: docker info\n\nDisplay system-wide information For example: $ sudo docker -D info\nContainers: 14\nImages: 52\nStorage Driver: aufs\n Root Dir: /var/lib/docker/aufs\n Backing Filesystem: extfs\n Dirs: 545\nExecution Driver: native-0.2\nKernel Version: 3.13.0-24-generic\nOperating System: Ubuntu 14.04 LTS\nCPUs: 1\nName: prod-server-42\nID: 7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\nTotal Memory: 2 GiB\nDebug mode (server): false\nDebug mode (client): true\nFds: 10\nGoroutines: 9\nEventsListeners: 0\nInit Path: /usr/bin/docker\nDocker Root Dir: /var/lib/docker\nUsername: svendowideit\nRegistry: [https://index.docker.io/v1/]\nLabels:\n storage=ssd The global -D option tells all docker commands to output debug information. When sending issue reports, please use docker version and docker -D info to\nensure we know how your setup is configured.",
"title": "info"
},
{
"loc": "/reference/commandline/cli#inspect",
"tags": "",
"text": "Usage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]\n\nReturn low-level information on a container or image\n\n -f, --format=\"\" Format the output using the given go template. By default, this will render all results in a JSON array. If a format is\nspecified, the given template will be executed for each result. Go's text/template package\ndescribes all the details of the format. Examples Get an instance's IP address: For the most part, you can pick out any field from the JSON in a fairly\nstraightforward manner. $ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID Get an instance's MAC Address: For the most part, you can pick out any field from the JSON in a fairly\nstraightforward manner. $ sudo docker inspect --format='{{.NetworkSettings.MacAddress}}' $INSTANCE_ID List All Port Bindings: One can loop over arrays and maps in the results to produce simple text\noutput: $ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} - {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID Find a Specific Port Mapping: The .Field syntax doesn't work when the field name begins with a\nnumber, but the template language's index function does. The .NetworkSettings.Ports section contains a map of the internal port\nmappings to a list of external address/port objects, so to grab just the\nnumeric public port, you use index to find the specific port map, and\nthen index 0 contains the first object inside of that. Then we ask for\nthe HostPort field to get the public address. $ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports \"8787/tcp\") 0).HostPort}}' $INSTANCE_ID Get config: The .Field syntax doesn't work when the field contains JSON data, but\nthe template language's custom json function does. The .config \nsection contains complex JSON object, so to grab it as JSON, you use json to convert the configuration object into JSON. 
$ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID",
"title": "inspect"
},
{
"loc": "/reference/commandline/cli#kill",
"tags": "",
"text": "Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]\n\nKill a running container using SIGKILL or a specified signal\n\n -s, --signal=\"KILL\" Signal to send to the container The main process inside the container will be sent SIGKILL , or any\nsignal specified with option --signal .",
"title": "kill"
},
{
"loc": "/reference/commandline/cli#load",
"tags": "",
"text": "Usage: docker load [OPTIONS]\n\nLoad an image from a tar archive on STDIN\n\n -i, --input=\"\" Read from a tar archive file, instead of STDIN Loads a tarred repository from a file or the standard input stream.\nRestores both images and tags. $ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\n$ sudo docker load busybox.tar\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nbusybox latest 769b9341d937 7 weeks ago 2.489 MB\n$ sudo docker load --input fedora.tar\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE\nbusybox latest 769b9341d937 7 weeks ago 2.489 MB\nfedora rawhide 0d20aec6529d 7 weeks ago 387 MB\nfedora 20 58394af37342 7 weeks ago 385.5 MB\nfedora heisenbug 58394af37342 7 weeks ago 385.5 MB\nfedora latest 58394af37342 7 weeks ago 385.5 MB",
"title": "load"
},
{
"loc": "/reference/commandline/cli#login",
"tags": "",
"text": "Usage: docker login [OPTIONS] [SERVER]\n\nRegister or log in to a Docker registry server, if no server is specified \"https://index.docker.io/v1/\" is the default.\n\n -e, --email=\"\" Email\n -p, --password=\"\" Password\n -u, --username=\"\" Username If you want to login to a self-hosted registry you can specify this by\nadding the server name. example:\n$ sudo docker login localhost:8080",
"title": "login"
},
{
"loc": "/reference/commandline/cli#logout",
"tags": "",
"text": "Usage: docker logout [SERVER]\n\nLog out from a Docker registry, if no server is specified \"https://index.docker.io/v1/\" is the default. For example: $ sudo docker logout localhost:8080",
"title": "logout"
},
{
"loc": "/reference/commandline/cli#logs",
"tags": "",
"text": "Usage: docker logs [OPTIONS] CONTAINER\n\nFetch the logs of a container\n\n -f, --follow=false Follow log output\n -t, --timestamps=false Show timestamps\n --tail=\"all\" Output the specified number of lines at the end of logs (defaults to all logs) The docker logs command batch-retrieves logs present at the time of execution. The docker logs --follow command will continue streaming the new output from\nthe container's STDOUT and STDERR . Passing a negative number or a non-integer to --tail is invalid and the\nvalue is set to all in that case. This behavior may change in the future. The docker logs --timestamp commands will add an RFC3339Nano\ntimestamp, for example 2014-09-16T06:17:46.000000000Z , to each\nlog entry. To ensure that the timestamps for are aligned the\nnano-second part of the timestamp will be padded with zero when necessary.",
"title": "logs"
},
{
"loc": "/reference/commandline/cli#pause",
"tags": "",
"text": "Usage: docker pause CONTAINER\n\nPause all processes within a container The docker pause command uses the cgroups freezer to suspend all processes in\na container. Traditionally, when suspending a process the SIGSTOP signal is\nused, which is observable by the process being suspended. With the cgroups freezer\nthe process is unaware, and unable to capture, that it is being suspended,\nand subsequently resumed. See the cgroups freezer documentation \nfor further details.",
"title": "pause"
},
{
"loc": "/reference/commandline/cli#port",
"tags": "",
"text": "Usage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]\n\nList port mappings for the CONTAINER, or lookup the public-facing port that is NAT-ed to the PRIVATE_PORT You can find out all the ports mapped by not specifying a PRIVATE_PORT , or\njust a specific mapping: $ sudo docker ps test\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nb650456536c7 busybox:latest top 54 minutes ago Up 54 minutes 0.0.0.0:1234- 9876/tcp, 0.0.0.0:4321- 7890/tcp test\n$ sudo docker port test\n7890/tcp - 0.0.0.0:4321\n9876/tcp - 0.0.0.0:1234\n$ sudo docker port test 7890/tcp\n0.0.0.0:4321\n$ sudo docker port test 7890/udp\n2014/06/24 11:53:36 Error: No public port '7890/udp' published for test\n$ sudo docker port test 7890\n0.0.0.0:4321",
"title": "port"
},
{
"loc": "/reference/commandline/cli#pause_1",
"tags": "",
"text": "Usage: docker pause CONTAINER\n\nPause all processes within a container The docker pause command uses the cgroups freezer to suspend all processes in\na container. Traditionally when suspending a process the SIGSTOP signal is\nused, which is observable by the process being suspended. With the cgroups freezer\nthe process is unaware, and unable to capture, that it is being suspended,\nand subsequently resumed. See the cgroups freezer documentation \nfor further details.",
"title": "pause"
},
{
"loc": "/reference/commandline/cli#rename",
"tags": "",
"text": "Usage: docker rename OLD_NAME NEW_NAME\n\nrename a existing container to a NEW_NAME The docker rename command allows the container to be renamed to a different name.",
"title": "rename"
},
{
"loc": "/reference/commandline/cli#ps",
"tags": "",
"text": "Usage: docker ps [OPTIONS]\n\nList containers\n\n -a, --all=false Show all containers. Only running containers are shown by default.\n --before=\"\" Show only container created before Id or Name, include non-running ones.\n -f, --filter=[] Provide filter values. Valid filters:\n exited= int - containers with exit code of int \n status=(restarting|running|paused|exited)\n -l, --latest=false Show only the latest created container, include non-running ones.\n -n=-1 Show n last created containers, include non-running ones.\n --no-trunc=false Don't truncate output\n -q, --quiet=false Only display numeric IDs\n -s, --size=false Display total file sizes\n --since=\"\" Show only containers created since Id or Name, include non-running ones. Running docker ps --no-trunc showing 2 linked containers. $ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nf7ee772232194fcc088c6bdec6ea09f7b3f6c54d53934658164b8602d7cd4744 ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp\nd0963715a061c7c7b7cc80b2646da913a959fbf13e80a971d4a60f6997a2f595 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db docker ps will show only running containers by default. To see all containers: docker ps -a Filtering The filtering flag ( -f or --filter) format is a key=value pair. If there is more\nthan one filter, then pass multiple flags (e.g. --filter \"foo=bar\" --filter \"bif=baz\" ) Current filters:\n * exited (int - the code of exited containers. 
Only useful with '--all')\n * status (restarting|running|paused|exited) Successfully exited containers $ sudo docker ps -a --filter 'exited=0'\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nea09c3c82f6e registry:latest /srv/run.sh 2 weeks ago Exited (0) 2 weeks ago 127.0.0.1:5000- 5000/tcp desperate_leakey\n106ea823fe4e fedora:latest /bin/sh -c 'bash -l' 2 weeks ago Exited (0) 2 weeks ago determined_albattani\n48ee228c9464 fedora:20 bash 2 weeks ago Exited (0) 2 weeks ago tender_torvalds This shows all the containers that have exited with a status of '0'.",
"title": "ps"
},
{
"loc": "/reference/commandline/cli#pull",
"tags": "",
"text": "Usage: docker pull [OPTIONS] NAME[:TAG]\n\nPull an image or a repository from the registry\n\n -a, --all-tags=false Download all tagged images in the repository Most of your images will be created on top of a base image from the Docker Hub registry. Docker Hub contains many pre-built images that you\ncan pull and try without needing to define and configure your own. It is also possible to manually specify the path of a registry to pull from.\nFor example, if you have set up a local registry, you can specify its path to\npull from it. A repository path is similar to a URL, but does not contain\na protocol specifier ( https:// , for example). To download a particular image, or set of images (i.e., a repository),\nuse docker pull : $ sudo docker pull debian\n# will pull the debian:latest image, its intermediate layers\n# and any aliases of the same id\n$ sudo docker pull debian:testing\n# will pull the image named debian:testing and any intermediate\n# layers it is based on.\n# (Typically the empty `scratch` image, a MAINTAINER layer,\n# and the un-tarred base).\n$ sudo docker pull --all-tags centos\n# will pull all the images from the centos repository\n$ sudo docker pull registry.hub.docker.com/debian\n# manually specifies the path to the default Docker registry. This could\n# be replaced with the path to a local registry to pull from another source.",
"title": "pull"
},
{
"loc": "/reference/commandline/cli#push",
"tags": "",
"text": "Usage: docker push NAME[:TAG]\n\nPush an image or a repository to the registry Use docker push to share your images to the Docker Hub \nregistry or to a self-hosted one.",
"title": "push"
},
{
"loc": "/reference/commandline/cli#restart",
"tags": "",
"text": "Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]\n\nRestart a running container\n\n -t, --time=10 Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.",
"title": "restart"
},
{
"loc": "/reference/commandline/cli#rm",
"tags": "",
"text": "Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]\n\nRemove one or more containers\n\n -f, --force=false Force the removal of a running container (uses SIGKILL)\n -l, --link=false Remove the specified link and not the underlying container\n -v, --volumes=false Remove the volumes associated with the container Examples $ sudo docker rm /redis\n/redis This will remove the container referenced under the link /redis . $ sudo docker rm --link /webapp/redis\n/webapp/redis This will remove the underlying link between /webapp and the /redis \ncontainers removing all network communication. $ sudo docker rm --force redis\nredis The main process inside the container referenced under the link /redis will receive SIGKILL , then the container will be removed. This command will delete all stopped containers. The command docker ps\n-a -q will return all existing container IDs and pass them to the rm \ncommand which will delete them. Any running containers will not be\ndeleted.",
"title": "rm"
},
{
"loc": "/reference/commandline/cli#rmi",
"tags": "",
"text": "Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]\n\nRemove one or more images\n\n -f, --force=false Force removal of the image\n --no-prune=false Do not delete untagged parents Removing tagged images Images can be removed either by their short or long IDs, or their image\nnames. If an image has more than one name, each of them needs to be\nremoved before the image is removed. $ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\ntest1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\ntest latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\ntest2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\n\n$ sudo docker rmi fd484f19954f\nError: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories\n2013/12/11 05:47:16 Error: failed to remove one or more images\n\n$ sudo docker rmi test1\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\n$ sudo docker rmi test2\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\n\n$ sudo docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\ntest latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)\n$ sudo docker rmi test\nUntagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8\nDeleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8",
"title": "rmi"
},
{
"loc": "/reference/commandline/cli#run",
"tags": "",
"text": "Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]\n\nRun a command in a new container\n\n -a, --attach=[] Attach to STDIN, STDOUT or STDERR.\n --add-host=[] Add a custom host-to-IP mapping (host:ip)\n -c, --cpu-shares=0 CPU shares (relative weight)\n --cap-add=[] Add Linux capabilities\n --cap-drop=[] Drop Linux capabilities\n --cidfile=\"\" Write the container ID to the file\n --cpuset=\"\" CPUs in which to allow execution (0-3, 0,1)\n -d, --detach=false Detached mode: run the container in the background and print the new container ID\n --device=[] Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)\n --dns=[] Set custom DNS servers\n --dns-search=[] Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)\n -e, --env=[] Set environment variables\n --entrypoint=\"\" Overwrite the default ENTRYPOINT of the image\n --env-file=[] Read in a line delimited file of environment variables\n --expose=[] Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host\n -h, --hostname=\"\" Container host name\n -i, --interactive=false Keep STDIN open even if not attached\n --ipc=\"\" Default is to create a private IPC namespace (POSIX SysV IPC) for the container\n 'container: name|id ': reuses another container shared memory, semaphores and message queues\n 'host': use the host shared memory,semaphores and message queues inside the container. 
Note: the host mode gives the container full access to local shared memory and is therefore considered insecure.\n --link=[] Add link to another container in the form of name:alias\n --lxc-conf=[] (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"\n -m, --memory=\"\" Memory limit (format: number optional unit , where unit = b, k, m or g)\n --memory-swap=\"\" Total memory usage (memory + swap), set '-1' to disable swap (format: number optional unit , where unit = b, k, m or g)\n --mac-address=\"\" Container MAC address (e.g. 92:d0:c6:0a:29:33)\n --name=\"\" Assign a name to the container\n --net=\"bridge\" Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container: name|id ': reuses another container's network stack\n 'host': use the host network stack inside the container. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n -P, --publish-all=false Publish all exposed ports to random ports on the host interfaces\n -p, --publish=[] Publish a container's port to the host\n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports. \n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n --pid=host 'host': use the host PID namespace inside the container. 
Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.\n --privileged=false Give extended privileges to this container\n --read-only=false Mount the container's root filesystem as read only\n --restart=\"\" Restart policy to apply when a container exits (no, on-failure[:max-retry], always)\n --rm=false Automatically remove the container when it exits (incompatible with -d)\n --security-opt=[] Security Options\n --sig-proxy=true Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.\n -t, --tty=false Allocate a pseudo-TTY\n -u, --user=\"\" Username or UID\n -v, --volume=[] Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)\n --volumes-from=[] Mount volumes from the specified container(s)\n -w, --workdir=\"\" Working directory inside the container The docker run command first creates a writeable container layer over the\nspecified image, and then starts it using the specified command. That is, docker run is equivalent to the API /containers/create then /containers/(id)/start . A stopped container can be restarted with all its\nprevious changes intact using docker start . See docker ps -a to view a list\nof all containers. There is detailed information about docker run in the Docker run reference . The docker run command can be used in combination with docker commit to change the command that a container runs . See the Docker User Guide for more detailed\ninformation about the --expose , -p , -P and --link parameters,\nand linking containers. Examples $ sudo docker run --name test -it debian\n$$ exit 13\nexit\n$ echo $?\n13\n$ sudo docker ps -a | grep test\n275c44472aeb debian:7 \"/bin/bash\" 26 seconds ago Exited (13) 17 seconds ago test In this example, we are running bash interactively in the debian:latest image, and giving\nthe container the name test . 
We then quit bash by running exit 13 , which means bash \nwill have an exit code of 13 . This is then passed on to the caller of docker run , and\nis recorded in the test container metadata. $ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo \"test\" This will create a container and print test to the console. The cidfile \nflag makes Docker attempt to create a new file and write the container ID to it.\nIf the file exists already, Docker will return an error. Docker will close this\nfile when docker run exits. $ sudo docker run -t -i --rm ubuntu bash\nroot@bc338942ef20:/# mount -t tmpfs none /mnt\nmount: permission denied This will not work, because by default, most potentially dangerous kernel\ncapabilities are dropped, including cap_sys_admin (which is required to mount\nfilesystems). However, the --privileged flag will allow it to run: $ sudo docker run --privileged ubuntu bash\nroot@50e3f57e16e6:/# mount -t tmpfs none /mnt\nroot@50e3f57e16e6:/# df -h\nFilesystem Size Used Avail Use% Mounted on\nnone 1.9G 0 1.9G 0% /mnt The --privileged flag gives all capabilities to the container, and it also\nlifts all the limitations enforced by the device cgroup controller. In other\nwords, the container can then do almost everything that the host can do. This\nflag exists to allow special use-cases, like running Docker within Docker. $ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd The -w flag lets the command be executed inside the given directory, here /path/to/dir/ . If the path does not exist it is created inside the container. $ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd The -v flag mounts the current working directory into the container. The -w flag \nlets the command be executed inside the current working directory, by\nchanging into the directory returned by pwd . So this\ncombination executes the command using the container, but inside the\ncurrent working directory. 
$ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash When the host directory of a bind-mounted volume doesn't exist, Docker\nwill automatically create this directory on the host for you. In the\nexample above, Docker will create the /doesnt/exist \nfolder before starting your container. $ sudo docker run --read-only -v /icanwrite busybox touch /icanwrite/here Volumes can be used in combination with --read-only to control where \na container writes files. The --read-only flag mounts the container's root\nfilesystem as read only, prohibiting writes to locations other than the\nspecified volumes for the container. $ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh By bind-mounting the docker unix socket and statically linked docker\nbinary (such as that provided by https://get.docker.com ), you give the container full access to create and\nmanipulate the host's Docker daemon. $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash This binds port 8080 of the container to port 80 on 127.0.0.1 of\nthe host machine. The Docker User Guide \nexplains in detail how to manipulate ports in Docker. $ sudo docker run --expose 80 ubuntu bash This exposes port 80 of the container for use within a link without\npublishing the port to the host system's interfaces. The Docker User\nGuide explains in detail how to manipulate\nports in Docker. $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash This sets environment variables in the container. For illustration all three\nflags are shown here. Where -e , --env take an environment variable and\nvalue, or if no = is provided, then that variable's current value is passed\nthrough (i.e. $MYVAR1 from the host is set to $MYVAR1 in the container). 
\nWhen no = is provided and that variable is not defined in the client's\nenvironment then that variable will be removed from the container's list of\nenvironment variables.\nAll three flags, -e , --env and --env-file can be repeated. Regardless of the order of these three flags, the --env-file flags are processed\nfirst, and then the -e , --env flags. This way, the -e or --env will\noverride variables as needed. $ cat ./env.list\nTEST_FOO=BAR\n$ sudo docker run --env TEST_FOO=\"This is a test\" --env-file ./env.list busybox env | grep TEST_FOO\nTEST_FOO=This is a test The --env-file flag takes a filename as an argument and expects each line\nto be in the VAR=VAL format, mimicking the argument passed to --env . Comment\nlines need only be prefixed with # . An example of a file passed with --env-file $ cat ./env.list\nTEST_FOO=BAR\n\n# this is a comment\nTEST_APP_DEST_HOST=10.10.0.127\nTEST_APP_DEST_PORT=8888\n\n# pass through this variable from the caller\nTEST_PASSTHROUGH\n$ sudo TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env\nHOME=/\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nHOSTNAME=5198e0745561\nTEST_FOO=BAR\nTEST_APP_DEST_HOST=10.10.0.127\nTEST_APP_DEST_PORT=8888\nTEST_PASSTHROUGH=howdy\n\n$ sudo docker run --name console -t -i ubuntu bash This will create and run a new container with the container name being console . $ sudo docker run --link /redis:redis --name console ubuntu bash The --link flag will link the container named /redis into the newly\ncreated container with the alias redis . The new container can access the\nnetwork and environment of the redis container via environment variables.\nThe --name flag will assign the name console to the newly created\ncontainer. $ sudo docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd The --volumes-from flag mounts all the defined volumes from the referenced\ncontainers. 
Containers can be specified by repetitions of the --volumes-from \nargument. The container ID may be optionally suffixed with :ro or :rw to\nmount the volumes in read-only or read-write mode, respectively. By default,\nthe volumes are mounted in the same mode (read write or read only) as\nthe reference container. The -a flag tells docker run to bind to the container's STDIN , STDOUT or STDERR . This makes it possible to manipulate the output and input as needed. $ echo \"test\" | sudo docker run -i -a stdin ubuntu cat - This pipes data into a container and prints the container's ID by attaching\nonly to the container's STDIN . $ sudo docker run -a stderr ubuntu echo test This isn't going to print anything unless there's an error because we've\nonly attached to the STDERR of the container. The container's logs\nstill store what's been written to STDERR and STDOUT . $ cat somefile | sudo docker run -i -a stdin mybuilder dobuild This is how piping a file into a container could be done for a build.\nThe container's ID will be printed after the build is done and the build\nlogs could be retrieved using docker logs . This is\nuseful if you need to pipe a file or something else into a container and\nretrieve the container's ID once the container has finished running. $ sudo docker run --device=/dev/sdc:/dev/xvdc --device=/dev/sdd --device=/dev/zero:/dev/nulo -i -t ubuntu ls -l /dev/{xvdc,sdd,nulo}\n brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc\n brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd\n crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo It is often necessary to directly expose devices to a container. The --device \noption enables that. For example, a specific block storage device or loop\ndevice or audio device can be added to an otherwise unprivileged container\n(without the --privileged flag) and have the application directly access it. 
By default, the container will be able to read , write and mknod these devices.\nThis can be overridden using a third :rwm set of options to each --device \nflag: $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc\n You will not be able to write the partition table.\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc\n fdisk: unable to open /dev/xvdc: Operation not permitted Note: --device cannot be safely used with ephemeral devices. Block devices that\nmay be removed should not be added to untrusted containers with --device . A complete example: $ sudo docker run -d --name static static-web-files sh\n$ sudo docker run -d --expose=8098 --name riak riakserver\n$ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver\n$ sudo docker run -d -p 1443:443 --dns=10.0.0.1 --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver\n$ sudo docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log This example shows five containers that might be set up to test a web\napplication change: Start a pre-prepared volume image static-web-files (in the background)\n that has CSS, image and static HTML in it, (with a VOLUME instruction in\n the Dockerfile to allow the web server to use those files); Start a pre-prepared riakserver image, give the container name riak and\n expose port 8098 to any containers that link to it; Start the appserver image, restricting its memory usage to 100MB, setting\n two environment variables DEVELOPMENT and BRANCH and bind-mounting the\n current directory ( $(pwd) ) in the container in read-only mode as 
/app/bin ; Start the webserver , mapping port 443 in the container to port 1443 on\n the Docker server, setting the DNS server to 10.0.0.1 and DNS search\n domain to dev.org , creating a volume to put the log files into (so we can\n access it from another container), then importing the files from the volume\n exposed by the static container, and linking to all exposed ports from\n riak and app . Lastly, we set the hostname to www.sven.dev.org so it's\n consistent with the pre-generated SSL certificate; Finally, we create a container that runs tail -f access.log using the logs\n volume from the web container, setting the workdir to /var/log/httpd . The\n --rm option means that when the container exits, the container's layer is\n removed. Restart Policies Use Docker's --restart to specify a container's restart policy . A restart \npolicy controls whether the Docker daemon restarts a container after exit.\nDocker supports the following restart policies: \n \n \n Policy \n Result \n \n \n \n \n no \n \n Do not automatically restart the container when it exits. This is the \n default.\n \n \n \n \n \n on-failure [:max-retries]\n \n \n \n Restart only if the container exits with a non-zero exit status.\n Optionally, limit the number of restart retries the Docker \n daemon attempts.\n \n \n \n always \n \n Always restart the container regardless of the exit status.\n When you specify always, the Docker daemon will try to restart\n the container indefinitely.\n \n \n $ sudo docker run --restart=always redis This will run the redis container with a restart policy of always \nso that if the container exits, Docker will restart it. More detailed information on restart policies can be found in the Restart Policies (--restart) section\nof the Docker run reference page. Adding entries to a container hosts file You can add other hosts into a container's /etc/hosts file by using one or more --add-host flags. 
This example adds a static address for a host named docker : $ docker run --add-host=docker:10.180.0.1 --rm -it debian\n $$ ping docker\n PING docker (10.180.0.1): 48 data bytes\n 56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms\n 56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms\n ^C--- docker ping statistics ---\n 2 packets transmitted, 2 packets received, 0% packet loss\n round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms Note: \nSometimes you need to connect to the Docker host, which means getting the IP\naddress of the host. You can use the following shell commands to simplify this\nprocess: $ alias hostip=\"ip route show 0.0.0.0/0 | grep -Eo 'via \\S+' | awk '{ print \\$2 }'\"\n $ docker run --add-host=docker:$(hostip) --rm -it debian",
"title": "run"
},
{
"loc": "/reference/commandline/cli#save",
"tags": "",
"text": "Usage: docker save [OPTIONS] IMAGE [IMAGE...]\n\nSave an image(s) to a tar archive (streamed to STDOUT by default)\n\n -o, --output=\"\" Write to a file, instead of STDOUT Produces a tarred repository to the standard output stream.\nContains all parent layers, and all tags + versions, or specified repo:tag , for\neach argument provided. It is used to create a backup that can then be used with docker load $ sudo docker save busybox busybox.tar\n$ ls -sh busybox.tar\n2.7M busybox.tar\n$ sudo docker save --output busybox.tar busybox\n$ ls -sh busybox.tar\n2.7M busybox.tar\n$ sudo docker save -o fedora-all.tar fedora\n$ sudo docker save -o fedora-latest.tar fedora:latest It is even useful to cherry-pick particular tags of an image repository $ sudo docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy",
"title": "save"
},
{
"loc": "/reference/commandline/cli#search",
"tags": "",
"text": "Search Docker Hub for images Usage: docker search [OPTIONS] TERM\n\nSearch the Docker Hub for images\n\n --automated=false Only show automated builds\n --no-trunc=false Don't truncate output\n -s, --stars=0 Only displays with at least x stars See Find Public Images on Docker Hub for\nmore details on finding shared images from the command line. Note: \nSearch queries will only return up to 25 results",
"title": "search"
},
{
"loc": "/reference/commandline/cli#start",
"tags": "",
"text": "Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]\n\nRestart a stopped container\n\n -a, --attach=false Attach container's STDOUT and STDERR and forward all signals to the process\n -i, --interactive=false Attach container's STDIN",
"title": "start"
},
{
"loc": "/reference/commandline/cli#stats",
"tags": "",
"text": "Usage: docker stats CONTAINER [CONTAINER...]\n\nDisplay a live stream of one or more containers' resource usage statistics\n\n --help=false Print usage Note : this functionality currently only works when using the libcontainer exec-driver. Running docker stats on multiple containers $ sudo docker stats redis1 redis2\nCONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O\nredis1 0.07% 796 KiB/64 MiB 1.21% 788 B/648 B\nredis2 0.07% 2.746 MiB/64 MiB 4.29% 1.266 KiB/648 B The docker stats command will only return a live stream of data for running \ncontainers. Stopped containers will not return any data. Note: \nIf you want more detailed information about a container's resource usage, use the API endpoint.",
"title": "stats"
},
{
"loc": "/reference/commandline/cli#stop",
"tags": "",
"text": "Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]\n\nStop a running container by sending SIGTERM and then SIGKILL after a grace period\n\n -t, --time=10 Number of seconds to wait for the container to stop before killing it. Default is 10 seconds. The main process inside the container will receive SIGTERM , and after a\ngrace period, SIGKILL .",
"title": "stop"
},
{
"loc": "/reference/commandline/cli#tag",
"tags": "",
"text": "Usage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]\n\nTag an image into a repository\n\n -f, --force=false Force You can group your images together using names and tags, and then upload\nthem to Share Images via Repositories .",
"title": "tag"
},
{
"loc": "/reference/commandline/cli#top",
"tags": "",
"text": "Usage: docker top CONTAINER [ps OPTIONS]\n\nDisplay the running processes of a container",
"title": "top"
},
{
"loc": "/reference/commandline/cli#unpause",
"tags": "",
"text": "Usage: docker unpause CONTAINER\n\nUnpause all processes within a container The docker unpause command uses the cgroups freezer to un-suspend all\nprocesses in a container. See the cgroups freezer documentation \nfor further details.",
"title": "unpause"
},
{
"loc": "/reference/commandline/cli#version",
"tags": "",
"text": "Usage: docker version\n\nShow the Docker version information. Show the Docker version, API version, Git commit, and Go version of\nboth Docker client and daemon.",
"title": "version"
},
{
"loc": "/reference/commandline/cli#wait",
"tags": "",
"text": "Usage: docker wait CONTAINER [CONTAINER...]\n\nBlock until a container stops, then print its exit code.",
"title": "wait"
},
{
"loc": "/reference/builder/",
"tags": "",
"text": "Dockerfile Reference\nDocker can build images automatically by reading the instructions\nfrom a Dockerfile. A Dockerfile is a text document that contains all\nthe commands you would normally execute manually in order to build a\nDocker image. By calling docker build from your terminal, you can have\nDocker build your image step by step, executing the instructions\nsuccessively.\nThis page discusses the specifics of all the instructions you can use in your\nDockerfile. To further help you write a clear, readable, maintainable\nDockerfile, we've also written a Dockerfile Best Practices\nguide. Lastly, you can test your\nDockerfile knowledge with the Dockerfile tutorial.\nUsage\nTo build an image from a source repository,\ncreate a description file called Dockerfile at the root of your repository.\nThis file will describe the steps to assemble the image.\nThen call docker build with the path of your source repository as the argument\n(for example, .):\n$ sudo docker build .\n\nThe path to the source repository defines where to find the context of\nthe build. The build is run by the Docker daemon, not by the CLI, so the\nwhole context must be transferred to the daemon. The Docker CLI reports\n\"Sending build context to Docker daemon\" when the context is sent to the daemon.\n\nWarning\nAvoid using your root directory, /, as the root of the source repository. The \ndocker build command will use whatever directory contains the Dockerfile as the build\ncontext (including all of its subdirectories). The build context will be sent to the\nDocker daemon before building the image, which means if you use / as the source\nrepository, the entire contents of your hard drive will get sent to the daemon (and\nthus to the machine running the daemon). You probably don't want that.\n\nIn most cases, it's best to put each Dockerfile in an empty directory, and then add only\nthe files needed for building that Dockerfile to that directory. 
To further speed up the\nbuild, you can exclude files and directories by adding a .dockerignore file to the same\ndirectory.\nYou can specify a repository and tag at which to save the new image if\nthe build succeeds:\n$ sudo docker build -t shykes/myapp .\n\nThe Docker daemon will run your steps one-by-one, committing the result\nto a new image if necessary, before finally outputting the ID of your\nnew image. The Docker daemon will automatically clean up the context you\nsent.\nNote that each instruction is run independently, and causes a new image\nto be created - so RUN cd /tmp will not have any effect on the next\ninstructions.\nWhenever possible, Docker will re-use the intermediate images,\naccelerating docker build significantly (indicated by Using cache -\nsee the Dockerfile Best Practices\nguide for more information):\n$ sudo docker build -t SvenDowideit/ambassador .\nUploading context 10.24 kB\nUploading context\nStep 1 : FROM docker-ut\n ---> cbba202fe96b\nStep 2 : MAINTAINER SvenDowideit@home.org.au\n ---> Using cache\n ---> 51182097be13\nStep 3 : CMD env | grep _TCP= | sed 's/.*_PORT_\\([0-9]*\\)_TCP=tcp:\\/\\/\\(.*\\):\\(.*\\)/socat TCP4-LISTEN:\\1,fork,reuseaddr TCP4:\\2:\\3 \\&/' | sh && top\n ---> Using cache\n ---> 1a5ffc17324d\nSuccessfully built 1a5ffc17324d\n\nWhen you're done with your build, you're ready to look into Pushing a\nrepository to its registry.\nFormat\nHere is the format of the Dockerfile:\n# Comment\nINSTRUCTION arguments\n\nInstructions are not case-sensitive; however, convention is for them to\nbe UPPERCASE in order to distinguish them from arguments more easily.\nDocker runs the instructions in a Dockerfile in order. The\nfirst instruction must be `FROM` in order to specify the Base\nImage from which you are building.\nDocker will treat lines that begin with # as a\ncomment. A # marker anywhere else in the line will\nbe treated as an argument. 
This allows statements like:\n# Comment\nRUN echo 'we are running some # of cool things'\n\nHere is the set of instructions you can use in a Dockerfile for building\nimages.\nEnvironment Replacement\n\nNote: prior to 1.3, Dockerfile environment variables were handled\nsimilarly, in that they would be replaced as described below. However, there\nwas no formal definition of which instructions handled environment\nreplacement at the time. After 1.3 this behavior will be preserved and\ncanonical.\n\nEnvironment variables (declared with the ENV statement) can also be used in\ncertain instructions as variables to be interpreted by the Dockerfile. Escapes\nare also handled for including variable-like syntax into a statement literally.\nEnvironment variables are notated in the Dockerfile either with\n$variable_name or ${variable_name}. They are treated equivalently and the\nbrace syntax is typically used to address issues with variable names with no\nwhitespace, like ${foo}_bar.\nEscaping is possible by adding a \\ before the variable: \\$foo or \\${foo},\nfor example, will translate to $foo and ${foo} literals respectively.\nExample (parsed representation is displayed after the #):\nFROM busybox\nENV foo /bar\nWORKDIR ${foo} # WORKDIR /bar\nADD . $foo # ADD . /bar\nCOPY \\$foo /quux # COPY $foo /quux\n\nThe instructions that handle environment variables in the Dockerfile are:\n\nENV\nADD\nCOPY\nWORKDIR\nEXPOSE\nVOLUME\nUSER\n\nONBUILD instructions are NOT supported for environment replacement, even\nthe instructions above.\nThe .dockerignore file\nIf a file named .dockerignore exists in the source repository, then it\nis interpreted as a newline-separated list of exclusion patterns.\nExclusion patterns match files or directories relative to the source repository\nthat will be excluded from the context. Globbing is done using Go's\nfilepath.Match rules.\n\nNote:\nThe .dockerignore file can even be used to ignore the Dockerfile and\n.dockerignore files. 
This might be useful if you are copying files from\nthe root of the build context into your new container but do not want to \ninclude the Dockerfile or .dockerignore files (e.g. ADD . /someDir/).\n\nThe following example shows the use of the .dockerignore file to exclude the\n.git directory from the context. Its effect can be seen in the changed size of\nthe uploaded context.\n$ sudo docker build .\nUploading context 18.829 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello World\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469\n$ echo \".git\" > .dockerignore\n$ sudo docker build .\nUploading context 6.76 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello World\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469\n\nFROM\nFROM image\n\nOr\nFROM image:tag\n\nThe FROM instruction sets the Base Image\nfor subsequent instructions. As such, a valid Dockerfile must have FROM as\nits first instruction. The image can be any valid image \u2013 it is especially easy\nto start by pulling an image from the Public Repositories.\nFROM must be the first non-comment instruction in the Dockerfile.\nFROM can appear multiple times within a single Dockerfile in order to create\nmultiple images. Simply make a note of the last image ID output by the commit\nbefore each new FROM command.\nIf no tag is given to the FROM instruction, latest is assumed. If the\nused tag does not exist, an error will be returned.\nMAINTAINER\nMAINTAINER name\n\nThe MAINTAINER instruction allows you to set the Author field of the\ngenerated images.\nRUN\nRUN has 2 forms:\n\nRUN command (the command is run in a shell - /bin/sh -c - shell form)\nRUN [\"executable\", \"param1\", \"param2\"] (exec form)\n\nThe RUN instruction will execute any commands in a new layer on top of the\ncurrent image and commit the results. 
The resulting committed image will be\nused for the next step in the Dockerfile.\nLayering RUN instructions and generating commits conforms to the core\nconcepts of Docker where commits are cheap and containers can be created from\nany point in an image's history, much like source control.\nThe exec form makes it possible to avoid shell string munging, and to RUN\ncommands using a base image that does not contain /bin/sh.\n\nNote:\nTo use a different shell, other than '/bin/sh', use the exec form\npassing in the desired shell. For example,\nRUN [\"/bin/bash\", \"-c\", \"echo hello\"]\nNote:\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes (').\nNote:\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example,\nRUN [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME.\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: RUN [ \"sh\", \"-c\", \"echo\", \"$HOME\" ].\n\nThe cache for RUN instructions isn't invalidated automatically during\nthe next build. The cache for an instruction like \nRUN apt-get dist-upgrade -y will be reused during the next build. The \ncache for RUN instructions can be invalidated by using the --no-cache \nflag, for example docker build --no-cache.\nSee the Dockerfile Best Practices\nguide for more information.\nThe cache for RUN instructions can be invalidated by ADD instructions. See\nbelow for details.\nKnown Issues (RUN)\n\nIssue 783 is about file\n permissions problems that can occur when using the AUFS file system. You\n might notice it during an attempt to rm a file, for example. 
The issue\n describes a workaround.\n\nCMD\nThe CMD instruction has three forms:\n\nCMD [\"executable\",\"param1\",\"param2\"] (exec form, this is the preferred form)\nCMD [\"param1\",\"param2\"] (as default parameters to ENTRYPOINT)\nCMD command param1 param2 (shell form)\n\nThere can only be one CMD instruction in a Dockerfile. If you list more than one CMD\nthen only the last CMD will take effect.\nThe main purpose of a CMD is to provide defaults for an executing\ncontainer. These defaults can include an executable, or they can omit\nthe executable, in which case you must specify an ENTRYPOINT\ninstruction as well.\n\nNote:\nIf CMD is used to provide default arguments for the ENTRYPOINT \ninstruction, both the CMD and ENTRYPOINT instructions should be specified \nwith the JSON array format.\nNote:\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes (').\nNote:\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example,\nCMD [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME.\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: CMD [ \"sh\", \"-c\", \"echo\", \"$HOME\" ].\n\nWhen used in the shell or exec formats, the CMD instruction sets the command\nto be executed when running the image.\nIf you use the shell form of the CMD, then the command will execute in\n/bin/sh -c:\nFROM ubuntu\nCMD echo \"This is a test.\" | wc -\n\nIf you want to run your command without a shell then you must\nexpress the command as a JSON array and give the full path to the executable.\nThis array form is the preferred format of CMD. 
Any additional parameters\nmust be individually expressed as strings in the array:\nFROM ubuntu\nCMD [\"/usr/bin/wc\",\"--help\"]\n\nIf you would like your container to run the same executable every time, then\nyou should consider using ENTRYPOINT in combination with CMD. See\nENTRYPOINT.\nIf the user specifies arguments to docker run then they will override the\ndefault specified in CMD.\n\nNote:\nDon't confuse RUN with CMD. RUN actually runs a command and commits\nthe result; CMD does not execute anything at build time, but specifies\nthe intended command for the image.\n\nEXPOSE\nEXPOSE port [port...]\n\nThe EXPOSE instruction informs Docker that the container will listen on the\nspecified network ports at runtime. Docker uses this information to interconnect\ncontainers using links (see the Docker User\nGuide) and to determine which ports to expose to the\nhost when using the -P flag.\n\nNote:\nEXPOSE doesn't define which ports can be exposed to the host or make ports\naccessible from the host by default. To expose ports to the host, at runtime,\nuse the -p flag or\nthe -P flag.\n\nENV\nENV key value\nENV key=value ...\n\nThe ENV instruction sets the environment variable key to the value\nvalue. This value will be in the environment of all \"descendant\" Dockerfile\ncommands and can be replaced inline in many as well.\nThe ENV instruction has two forms. The first form, ENV key value,\nwill set a single variable to a value. The entire string after the first\nspace will be treated as the value - including characters such as \nspaces and quotes.\nThe second form, ENV key=value ..., allows for multiple variables to \nbe set at one time. Notice that the second form uses the equals sign (=) \nin the syntax, while the first form does not. 
Like command line parsing, \nquotes and backslashes can be used to include spaces within values.\nFor example:\nENV myName=\"John Doe\" myDog=Rex\\ The\\ Dog \\\n myCat=fluffy\n\nand\nENV myName John Doe\nENV myDog Rex The Dog\nENV myCat fluffy\n\nwill yield the same net results in the final container, but the first form \ndoes it all in one layer.\nThe environment variables set using ENV will persist when a container is run\nfrom the resulting image. You can view the values using docker inspect, and\nchange them using docker run --env key=value.\n\nNote:\nEnvironment persistence can cause unexpected effects. For example,\nsetting ENV DEBIAN_FRONTEND noninteractive may confuse apt-get\nusers on a Debian-based image. To set a value for a single command, use\nRUN key=value command.\n\nADD\nADD has two forms:\n\nADD src... dest\nADD [\"src\"... \"dest\"] (this form is required for paths containing\nwhitespace)\n\nThe ADD instruction copies new files, directories or remote file URLs from src\nand adds them to the filesystem of the container at the path dest. \nMultiple src resources may be specified but if they are files or \ndirectories then they must be relative to the source directory that is \nbeing built (the context of the build).\nEach src may contain wildcards and matching will be done using Go's\nfilepath.Match rules.\nFor most command line uses this should act as expected, for example:\nADD hom* /mydir/ # adds all files starting with \"hom\"\nADD hom?.txt /mydir/ # ? is replaced with any single character\n\nThe dest is an absolute path, or a path relative to WORKDIR, into which\nthe source will be copied inside the destination container.\nADD test aDir/ # adds \"test\" to `WORKDIR`/aDir/\n\nAll new files and directories are created with a UID and GID of 0.\nIn the case where src is a remote file URL, the destination will\nhave permissions of 600. 
If the remote file being retrieved has an HTTP\nLast-Modified header, the timestamp from that header will be used\nto set the mtime on the destination file. Then, like any other file\nprocessed during an ADD, mtime will be included in the determination\nof whether or not the file has changed and the cache should be updated.\n\nNote:\nIf you build by passing a Dockerfile through STDIN (docker\nbuild - < somefile), there is no build context, so the Dockerfile\ncan only contain a URL based ADD instruction. You can also pass a\ncompressed archive through STDIN: (docker build - < archive.tar.gz),\nthe Dockerfile at the root of the archive and the rest of the\narchive will get used as the context of the build.\nNote:\nIf your URL files are protected using authentication, you\nwill need to use RUN wget, RUN curl or use another tool from\nwithin the container as the ADD instruction does not support\nauthentication.\nNote:\nThe first encountered ADD instruction will invalidate the cache for all\nfollowing instructions from the Dockerfile if the contents of src have\nchanged. This includes invalidating the cache for RUN instructions.\nSee the Dockerfile Best Practices\nguide for more information.\n\nThe copy obeys the following rules:\n\n\nThe src path must be inside the context of the build;\n you cannot ADD ../something /something, because the first step of a\n docker build is to send the context directory (and subdirectories) to the\n docker daemon.\n\n\nIf src is a URL and dest does not end with a trailing slash, then a\n file is downloaded from the URL and copied to dest.\n\n\nIf src is a URL and dest does end with a trailing slash, then the\n filename is inferred from the URL and the file is downloaded to\n dest/filename. For instance, ADD http://example.com/foobar / would\n create the file /foobar. 
The URL must have a nontrivial path so that an\n appropriate filename can be discovered in this case (http://example.com\n will not work).\n\n\nIf src is a directory, the entire contents of the directory are copied, \n including filesystem metadata. \n\nNote:\nThe directory itself is not copied, just its contents.\n\n\n\nIf src is a local tar archive in a recognized compression format\n (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources\n from remote URLs are not decompressed. When a directory is copied or\n unpacked, it has the same behavior as tar -x: the result is the union of:\n\nWhatever existed at the destination path and\nThe contents of the source tree, with conflicts resolved in favor\n of \"2.\" on a file-by-file basis.\n\n\n\nIf src is any other kind of file, it is copied individually along with\n its metadata. In this case, if dest ends with a trailing slash /, it\n will be considered a directory and the contents of src will be written\n at dest/base(src).\n\n\nIf multiple src resources are specified, either directly or due to the\n use of a wildcard, then dest must be a directory, and it must end with \n a slash /.\n\n\nIf dest does not end with a trailing slash, it will be considered a\n regular file and the contents of src will be written at dest.\n\n\nIf dest doesn't exist, it is created along with all missing directories\n in its path.\n\n\nCOPY\nCOPY has two forms:\n\nCOPY src... dest\nCOPY [\"src\"... 
\"dest\"] (this form is required for paths containing\nwhitespace)\n\nThe COPY instruction copies new files or directories from src\nand adds them to the filesystem of the container at the path dest.\nMultiple src resource may be specified but they must be relative\nto the source directory that is being built (the context of the build).\nEach src may contain wildcards and matching will be done using Go's\nfilepath.Match rules.\nFor most command line uses this should act as expected, for example:\nCOPY hom* /mydir/ # adds all files starting with \"hom\"\nCOPY hom?.txt /mydir/ # ? is replaced with any single character\n\nThe dest is an absolute path, or a path relative to WORKDIR, into which\nthe source will be copied inside the destination container.\nCOPY test aDir/ # adds \"test\" to `WORKDIR`/aDir/\n\nAll new files and directories are created with a UID and GID of 0.\n\nNote:\nIf you build using STDIN (docker build - somefile), there is no\nbuild context, so COPY can't be used.\n\nThe copy obeys the following rules:\n\n\nThe src path must be inside the context of the build;\n you cannot COPY ../something /something, because the first step of a\n docker build is to send the context directory (and subdirectories) to the\n docker daemon.\n\n\nIf src is a directory, the entire contents of the directory are copied, \n including filesystem metadata. \n\nNote:\nThe directory itself is not copied, just its contents.\n\n\n\nIf src is any other kind of file, it is copied individually along with\n its metadata. 
In this case, if dest ends with a trailing slash /, it\n will be considered a directory and the contents of src will be written\n at dest/base(src).\n\n\nIf multiple src resources are specified, either directly or due to the\n use of a wildcard, then dest must be a directory, and it must end with \n a slash /.\n\n\nIf dest does not end with a trailing slash, it will be considered a\n regular file and the contents of src will be written at dest.\n\n\nIf dest doesn't exist, it is created along with all missing directories\n in its path.\n\n\nENTRYPOINT\nENTRYPOINT has two forms:\n\nENTRYPOINT [\"executable\", \"param1\", \"param2\"]\n (the preferred exec form)\nENTRYPOINT command param1 param2\n (shell form)\n\nAn ENTRYPOINT allows you to configure a container that will run as an executable.\nFor example, the following will start nginx with its default content, listening\non port 80:\ndocker run -i -t --rm -p 80:80 nginx\n\nCommand line arguments to docker run image will be appended after all\nelements in an exec form ENTRYPOINT, and will override all elements specified\nusing CMD.\nThis allows arguments to be passed to the entry point, i.e., docker run image -d\nwill pass the -d argument to the entry point. 
\nYou can override the ENTRYPOINT instruction using the docker run --entrypoint\nflag.\nThe shell form prevents any CMD or run command line arguments from being\nused, but has the disadvantage that your ENTRYPOINT will be started as a\nsubcommand of /bin/sh -c, which does not pass signals.\nThis means that the executable will not be the container's PID 1 - and\nwill not receive Unix signals - so your executable will not receive a\nSIGTERM from docker stop container.\nOnly the last ENTRYPOINT instruction in the Dockerfile will have an effect.\nExec form ENTRYPOINT example\nYou can use the exec form of ENTRYPOINT to set fairly stable default commands\nand arguments and then use either form of CMD to set additional defaults that\nare more likely to be changed.\nFROM ubuntu\nENTRYPOINT [\"top\", \"-b\"]\nCMD [\"-c\"]\n\nWhen you run the container, you can see that top is the only process:\n$ docker run -it --rm --name test top -H\ntop - 08:25:00 up 7:27, 0 users, load average: 0.00, 0.01, 0.05\nThreads: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem: 2056668 total, 1616832 used, 439836 free, 99352 buffers\nKiB Swap: 1441840 total, 0 used, 1441840 free. 1324440 cached Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 19744 2336 2080 R 0.0 0.1 0:00.04 top\n\nTo examine the result further, you can use docker exec:\n$ docker exec -it test ps aux\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 2.6 0.1 19752 2352 ? Ss+ 08:24 0:00 top -b -H\nroot 7 0.0 0.1 15572 2164 ? 
R+ 08:25 0:00 ps aux\n\nAnd you can gracefully request top to shut down using docker stop test.\nThe following Dockerfile shows using the ENTRYPOINT to run Apache in the\nforeground (i.e., as PID 1):\nFROM debian:stable\nRUN apt-get update && apt-get install -y --force-yes apache2\nEXPOSE 80 443\nVOLUME [\"/var/www\", \"/var/log/apache2\", \"/etc/apache2\"]\nENTRYPOINT [\"/usr/sbin/apache2ctl\", \"-D\", \"FOREGROUND\"]\n\n\nIf you need to write a starter script for a single executable, you can ensure that\nthe final executable receives the Unix signals by using exec and gosu\ncommands:\n#!/bin/bash\nset -e\n\nif [ \"$1\" = 'postgres' ]; then\n chown -R postgres \"$PGDATA\"\n\n if [ -z \"$(ls -A \"$PGDATA\")\" ]; then\n gosu postgres initdb\n fi\n\n exec gosu postgres \"$@\"\nfi\n\nexec \"$@\"\n\n\nLastly, if you need to do some extra cleanup (or communicate with other containers)\non shutdown, or are co-ordinating more than one executable, you may need to ensure\nthat the ENTRYPOINT script receives the Unix signals, passes them on, and then\ndoes some more work:\n#!/bin/sh\n# Note: I've written this using sh so it works in the busybox container too\n\n# USE the trap if you need to also do manual cleanup after the service is stopped,\n# or need to start multiple services in the one container\ntrap \"echo TRAPed signal\" HUP INT QUIT KILL TERM\n\n# start service in background here\n/usr/sbin/apachectl start\n\necho \"[hit enter key to exit] or run 'docker stop container'\"\nread\n\n# stop service and clean up here\necho stopping apache\n/usr/sbin/apachectl stop\n\necho exited $0\n\n\nIf you run this image with docker run -it --rm -p 80:80 --name test apache,\nyou can then examine the container's processes with docker exec, or docker top,\nand then ask the script to stop Apache:\n$ docker exec -it test ps aux\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.1 0.0 4448 692 ? Ss+ 00:42 0:00 /bin/sh /run.sh 123 cmd cmd2\nroot 19 0.0 0.2 71304 4440 ? 
Ss 00:42 0:00 /usr/sbin/apache2 -k start\nwww-data 20 0.2 0.2 360468 6004 ? Sl 00:42 0:00 /usr/sbin/apache2 -k start\nwww-data 21 0.2 0.2 360468 6000 ? Sl 00:42 0:00 /usr/sbin/apache2 -k start\nroot 81 0.0 0.1 15572 2140 ? R+ 00:44 0:00 ps aux\n$ docker top test\nPID USER COMMAND\n10035 root {run.sh} /bin/sh /run.sh 123 cmd cmd2\n10054 root /usr/sbin/apache2 -k start\n10055 33 /usr/sbin/apache2 -k start\n10056 33 /usr/sbin/apache2 -k start\n$ /usr/bin/time docker stop test\ntest\nreal 0m 0.27s\nuser 0m 0.03s\nsys 0m 0.03s\n\n\n\nNote: you can override the ENTRYPOINT setting using --entrypoint,\nbut this can only set the binary to exec (no sh -c will be used).\nNote:\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes (').\nNote:\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example,\nENTRYPOINT [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME.\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: ENTRYPOINT [ \"sh\", \"-c\", \"echo\", \"$HOME\" ].\nVariables that are defined in the Dockerfile using ENV will be substituted by\nthe Dockerfile parser.\n\nShell form ENTRYPOINT example\nYou can specify a plain string for the ENTRYPOINT and it will execute in /bin/sh -c.\nThis form will use shell processing to substitute shell environment variables,\nand will ignore any CMD or docker run command line arguments.\nTo ensure that docker stop will signal any long running ENTRYPOINT executable\ncorrectly, you need to remember to start it with exec:\nFROM ubuntu\nENTRYPOINT exec top -b\n\nWhen you run this image, you'll see the single PID 1 process:\n$ docker run -it --rm --name test top\nMem: 1704520K used, 352148K free, 0K shrd, 0K buff, 140368121167873K cached\nCPU: 5% usr 0% sys 0% nic 94% idle 0% io 0% irq 0% sirq\nLoad average: 0.08 0.03 0.05 2/98 
6\n PID PPID USER STAT VSZ %VSZ %CPU COMMAND\n 1 0 root R 3164 0% 0% top -b\n\nWhich will exit cleanly on docker stop:\n$ /usr/bin/time docker stop test\ntest\nreal 0m 0.20s\nuser 0m 0.02s\nsys 0m 0.04s\n\nIf you forget to add exec to the beginning of your ENTRYPOINT:\nFROM ubuntu\nENTRYPOINT top -b\nCMD --ignored-param1\n\nYou can then run it (giving it a name for the next step):\n$ docker run -it --name test top --ignored-param2\nMem: 1704184K used, 352484K free, 0K shrd, 0K buff, 140621524238337K cached\nCPU: 9% usr 2% sys 0% nic 88% idle 0% io 0% irq 0% sirq\nLoad average: 0.01 0.02 0.05 2/101 7\n PID PPID USER STAT VSZ %VSZ %CPU COMMAND\n 1 0 root S 3168 0% 0% /bin/sh -c top -b cmd cmd2\n 7 1 root R 3164 0% 0% top -b\n\nYou can see from the output of top that the specified ENTRYPOINT is not PID 1.\nIf you then run docker stop test, the container will not exit cleanly - the\nstop command will be forced to send a SIGKILL after the timeout:\n$ docker exec -it test ps aux\nPID USER COMMAND\n 1 root /bin/sh -c top -b cmd cmd2\n 7 root top -b\n 8 root ps aux\n$ /usr/bin/time docker stop test\ntest\nreal 0m 10.19s\nuser 0m 0.04s\nsys 0m 0.03s\n\nVOLUME\nVOLUME [\"/data\"]\n\nThe VOLUME instruction creates a mount point with the specified name\nand marks it as holding externally mounted volumes from native host or other\ncontainers. The value can be a JSON array, VOLUME [\"/var/log/\"], or a plain\nstring with multiple arguments, such as VOLUME /var/log or VOLUME /var/log\n/var/db. For more information/examples and mounting instructions via the\nDocker client, refer to \nShare Directories via Volumes\ndocumentation.\nThe docker run command initializes the newly created volume with any data \nthat exists at the specified location within the base image. 
For example, \nconsider the following Dockerfile snippet:\nFROM ubuntu\nRUN mkdir /myvol\nRUN echo \"hello world\" > /myvol/greeting\nVOLUME /myvol\n\nThis Dockerfile results in an image that causes docker run to\ncreate a new mount point at /myvol and copy the greeting file \ninto the newly created volume.\n\nNote:\nThe list is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes (').\n\nUSER\nUSER daemon\n\nThe USER instruction sets the user name or UID to use when running the image\nand for any RUN, CMD and ENTRYPOINT instructions that follow it in the\nDockerfile.\nWORKDIR\nWORKDIR /path/to/workdir\n\nThe WORKDIR instruction sets the working directory for any RUN, CMD,\nENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.\nIt can be used multiple times in the one Dockerfile. If a relative path\nis provided, it will be relative to the path of the previous WORKDIR\ninstruction. For example:\nWORKDIR /a\nWORKDIR b\nWORKDIR c\nRUN pwd\n\nThe output of the final pwd command in this Dockerfile would be\n/a/b/c.\nThe WORKDIR instruction can resolve environment variables previously set using\nENV. You can only use environment variables explicitly set in the Dockerfile.\nFor example:\nENV DIRPATH /path\nWORKDIR $DIRPATH/$DIRNAME\n\nThe output of the final pwd command in this Dockerfile would be\n/path/$DIRNAME\nONBUILD\nONBUILD [INSTRUCTION]\n\nThe ONBUILD instruction adds to the image a trigger instruction to\nbe executed at a later time, when the image is used as the base for\nanother build. 
The trigger will be executed in the context of the\ndownstream build, as if it had been inserted immediately after the\nFROM instruction in the downstream Dockerfile.\nAny build instruction can be registered as a trigger.\nThis is useful if you are building an image which will be used as a base\nto build other images, for example an application build environment or a\ndaemon which may be customized with user-specific configuration.\nFor example, if your image is a reusable Python application builder, it\nwill require application source code to be added in a particular\ndirectory, and it might require a build script to be called after\nthat. You can't just call ADD and RUN now, because you don't yet\nhave access to the application source code, and it will be different for\neach application build. You could simply provide application developers\nwith a boilerplate Dockerfile to copy-paste into their application, but\nthat is inefficient, error-prone and difficult to update because it\nmixes with application-specific code.\nThe solution is to use ONBUILD to register advance instructions to\nrun later, during the next build stage.\nHere's how it works:\n\nWhen it encounters an ONBUILD instruction, the builder adds a\n trigger to the metadata of the image being built. The instruction\n does not otherwise affect the current build.\nAt the end of the build, a list of all triggers is stored in the\n image manifest, under the key OnBuild. They can be inspected with\n the docker inspect command.\nLater the image may be used as a base for a new build, using the\n FROM instruction. As part of processing the FROM instruction,\n the downstream builder looks for ONBUILD triggers, and executes\n them in the same order they were registered. If any of the triggers\n fail, the FROM instruction is aborted which in turn causes the\n build to fail. 
If all triggers succeed, the FROM instruction\n completes and the build continues as usual.\nTriggers are cleared from the final image after being executed. In\n other words, they are not inherited by \"grand-children\" builds.\n\nFor example, you might add something like this:\n[...]\nONBUILD ADD . /app/src\nONBUILD RUN /usr/local/bin/python-build --dir /app/src\n[...]\n\n\nWarning: Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed.\nWarning: The ONBUILD instruction may not trigger FROM or MAINTAINER instructions.\n\nDockerfile Examples\n# Nginx\n#\n# VERSION 0.0.1\n\nFROM ubuntu\nMAINTAINER Victor Vieux <victor@docker.com>\n\nRUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server\n\n# Firefox over VNC\n#\n# VERSION 0.3\n\nFROM ubuntu\n\n# Install vnc, xvfb in order to create a 'fake' display and firefox\nRUN apt-get update && apt-get install -y x11vnc xvfb firefox\nRUN mkdir ~/.vnc\n# Setup a password\nRUN x11vnc -storepasswd 1234 ~/.vnc/passwd\n# Autostart firefox (might not be the best way, but it does the trick)\nRUN bash -c 'echo \"firefox\" >> /.bashrc'\n\nEXPOSE 5900\nCMD [\"x11vnc\", \"-forever\", \"-usepw\", \"-create\"]\n\n# Multiple images example\n#\n# VERSION 0.1\n\nFROM ubuntu\nRUN echo foo > bar\n# Will output something like ===> 907ad6c2736f\n\nFROM ubuntu\nRUN echo moo > oink\n# Will output something like ===> 695d7793cbe4\n\n# You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with\n# /oink.",
|
|
"title": "Dockerfile"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#dockerfile-reference",
|
|
"tags": "",
|
|
"text": "Docker can build images automatically by reading the instructions\nfrom a Dockerfile . A Dockerfile is a text document that contains all\nthe commands you would normally execute manually in order to build a\nDocker image. By calling docker build from your terminal, you can have\nDocker build your image step by step, executing the instructions\nsuccessively. This page discusses the specifics of all the instructions you can use in your Dockerfile . To further help you write a clear, readable, maintainable Dockerfile , we've also written a Dockerfile Best Practices\nguide . Lastly, you can test your\nDockerfile knowledge with the Dockerfile tutorial .",
|
|
"title": "Dockerfile Reference"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#usage",
|
|
"tags": "",
|
|
"text": "To build an image from a source repository,\ncreate a description file called Dockerfile at the root of your repository.\nThis file will describe the steps to assemble the image. Then call docker build with the path of your source repository as the argument\n(for example, . ): $ sudo docker build . The path to the source repository defines where to find the context of\nthe build. The build is run by the Docker daemon, not by the CLI, so the\nwhole context must be transferred to the daemon. The Docker CLI reports\n\"Sending build context to Docker daemon\" when the context is sent to the daemon. Warning \nAvoid using your root directory, / , as the root of the source repository. The docker build command will use whatever directory contains the Dockerfile as the build\ncontext (including all of its subdirectories). The build context will be sent to the\nDocker daemon before building the image, which means if you use / as the source\nrepository, the entire contents of your hard drive will get sent to the daemon (and\nthus to the machine running the daemon). You probably don't want that. In most cases, it's best to put each Dockerfile in an empty directory, and then add only\nthe files needed for building that Dockerfile to that directory. To further speed up the\nbuild, you can exclude files and directories by adding a .dockerignore file to the same\ndirectory. You can specify a repository and tag at which to save the new image if\nthe build succeeds: $ sudo docker build -t shykes/myapp . The Docker daemon will run your steps one-by-one, committing the result\nto a new image if necessary, before finally outputting the ID of your\nnew image. The Docker daemon will automatically clean up the context you\nsent. Note that each instruction is run independently, and causes a new image\nto be created - so RUN cd /tmp will not have any effect on the next\ninstructions. 
Whenever possible, Docker will re-use the intermediate images,\naccelerating docker build significantly (indicated by Using cache -\nsee the Dockerfile Best Practices\nguide for more information): $ sudo docker build -t SvenDowideit/ambassador .\nUploading context 10.24 kB\nUploading context\nStep 1 : FROM docker-ut\n ---> cbba202fe96b\nStep 2 : MAINTAINER SvenDowideit@home.org.au\n ---> Using cache\n ---> 51182097be13\nStep 3 : CMD env | grep _TCP= | sed 's/.*_PORT_\\([0-9]*\\)_TCP=tcp:\\/\\/\\(.*\\):\\(.*\\)/socat TCP4-LISTEN:\\1,fork,reuseaddr TCP4:\\2:\\3 \\&/' | sh && top\n ---> Using cache\n ---> 1a5ffc17324d\nSuccessfully built 1a5ffc17324d When you're done with your build, you're ready to look into Pushing a\nrepository to its registry .",
|
|
"title": "Usage"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#format",
|
|
"tags": "",
|
|
"text": "Here is the format of the Dockerfile : # Comment\nINSTRUCTION arguments The instruction is not case-sensitive; however, the convention is to write them in\nUPPERCASE to distinguish them from arguments more easily. Docker runs the instructions in a Dockerfile in order. The\nfirst instruction must be `FROM` in order to specify the Base\nImage from which you are building. Docker will treat lines that begin with # as a\ncomment. A # marker anywhere else in the line will\nbe treated as an argument. This allows statements like: # Comment\nRUN echo 'we are running some # of cool things' Here is the set of instructions you can use in a Dockerfile for building\nimages. Environment Replacement Note : prior to 1.3, Dockerfile environment variables were handled\nsimilarly, in that they would be replaced as described below. However, there\nwas no formal definition as to which instructions handled environment\nreplacement at the time. After 1.3 this behavior is preserved and\ncanonical. Environment variables (declared with the ENV statement ) can also be used in\ncertain instructions as variables to be interpreted by the Dockerfile . Escapes\nare also handled for including variable-like syntax into a statement literally. Environment variables are notated in the Dockerfile either with $variable_name or ${variable_name} . They are treated equivalently and the\nbrace syntax is typically used to address issues with variable names with no\nwhitespace, like ${foo}_bar . Escaping is possible by adding a \\ before the variable: \\$foo or \\${foo} ,\nfor example, will translate to $foo and ${foo} literals respectively. Example (parsed representation is displayed after the # ): FROM busybox\nENV foo /bar\nWORKDIR ${foo} # WORKDIR /bar\nADD . $foo # ADD . 
/bar\nCOPY \\$foo /quux # COPY $foo /quux The instructions that handle environment variables in the Dockerfile are: ENV ADD COPY WORKDIR EXPOSE VOLUME USER ONBUILD instructions are NOT supported for environment replacement, even\nthe instructions above.",
|
|
"title": "Format"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#the-dockerignore-file",
|
|
"tags": "",
|
|
"text": "If a file named .dockerignore exists in the source repository, then it\nis interpreted as a newline-separated list of exclusion patterns.\nExclusion patterns match files or directories relative to the source repository\nthat will be excluded from the context. Globbing is done using Go's filepath.Match rules. Note :\nThe .dockerignore file can even be used to ignore the Dockerfile and .dockerignore files. This might be useful if you are copying files from\nthe root of the build context into your new container but do not want to \ninclude the Dockerfile or .dockerignore files (e.g. ADD . /someDir/ ). The following example shows the use of the .dockerignore file to exclude the .git directory from the context. Its effect can be seen in the changed size of\nthe uploaded context. $ sudo docker build .\nUploading context 18.829 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello World\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469\n$ echo \".git\" > .dockerignore\n$ sudo docker build .\nUploading context 6.76 MB\nUploading context\nStep 0 : FROM busybox\n ---> 769b9341d937\nStep 1 : CMD echo Hello World\n ---> Using cache\n ---> 99cc1ad10469\nSuccessfully built 99cc1ad10469",
|
|
"title": "The .dockerignore file"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#from",
|
|
"tags": "",
|
|
"text": "FROM image Or FROM image : tag The FROM instruction sets the Base Image \nfor subsequent instructions. As such, a valid Dockerfile must have FROM as\nits first instruction. The image can be any valid image \u2013 it is especially easy\nto start by pulling an image from the Public Repositories . FROM must be the first non-comment instruction in the Dockerfile . FROM can appear multiple times within a single Dockerfile in order to create\nmultiple images. Simply make a note of the last image ID output by the commit\nbefore each new FROM command. If no tag is given to the FROM instruction, latest is assumed. If the\nused tag does not exist, an error will be returned.",
|
|
"title": "FROM"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#maintainer",
|
|
"tags": "",
|
|
"text": "MAINTAINER name The MAINTAINER instruction allows you to set the Author field of the\ngenerated images.",
|
|
"title": "MAINTAINER"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#run",
|
|
"tags": "",
|
|
"text": "RUN has 2 forms: RUN command (the command is run in a shell - /bin/sh -c - shell form) RUN [\"executable\", \"param1\", \"param2\"] ( exec form) The RUN instruction will execute any commands in a new layer on top of the\ncurrent image and commit the results. The resulting committed image will be\nused for the next step in the Dockerfile . Layering RUN instructions and generating commits conforms to the core\nconcepts of Docker where commits are cheap and containers can be created from\nany point in an image's history, much like source control. The exec form makes it possible to avoid shell string munging, and to RUN \ncommands using a base image that does not contain /bin/sh . Note :\nTo use a different shell, other than '/bin/sh', use the exec form\npassing in the desired shell. For example, RUN [\"/bin/bash\", \"-c\", \"echo hello\"] Note :\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes ('). Note :\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example, RUN [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME .\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: RUN [ \"sh\", \"-c\", \"echo\", \"$HOME\" ] . The cache for RUN instructions isn't invalidated automatically during\nthe next build. The cache for an instruction like RUN apt-get dist-upgrade -y will be reused during the next build. The \ncache for RUN instructions can be invalidated by using the --no-cache \nflag, for example docker build --no-cache . See the Dockerfile Best Practices\nguide for more information. The cache for RUN instructions can be invalidated by ADD instructions. See below for details. Known Issues (RUN) Issue 783 is about file\n permissions problems that can occur when using the AUFS file system. 
You\n might notice it during an attempt to rm a file, for example. The issue\n describes a workaround.",
|
|
"title": "RUN"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#cmd",
|
|
"tags": "",
|
|
"text": "The CMD instruction has three forms: CMD [\"executable\",\"param1\",\"param2\"] ( exec form, this is the preferred form) CMD [\"param1\",\"param2\"] (as default parameters to ENTRYPOINT ) CMD command param1 param2 ( shell form) There can only be one CMD instruction in a Dockerfile . If you list more than one CMD \nthen only the last CMD will take effect. The main purpose of a CMD is to provide defaults for an executing\ncontainer. These defaults can include an executable, or they can omit\nthe executable, in which case you must specify an ENTRYPOINT \ninstruction as well. Note :\nIf CMD is used to provide default arguments for the ENTRYPOINT \ninstruction, both the CMD and ENTRYPOINT instructions should be specified \nwith the JSON array format. Note :\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes ('). Note :\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example, CMD [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME .\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: CMD [ \"sh\", \"-c\", \"echo\", \"$HOME\" ] . When used in the shell or exec formats, the CMD instruction sets the command\nto be executed when running the image. If you use the shell form of the CMD , then the command will execute in /bin/sh -c : FROM ubuntu\nCMD echo \"This is a test.\" | wc - If you want to run your command without a shell then you must\nexpress the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD . Any additional parameters\nmust be individually expressed as strings in the array: FROM ubuntu\nCMD [\"/usr/bin/wc\",\"--help\"] If you would like your container to run the same executable every time, then\nyou should consider using ENTRYPOINT in combination with CMD . 
See ENTRYPOINT . If the user specifies arguments to docker run then they will override the\ndefault specified in CMD . Note :\ndon't confuse RUN with CMD . RUN actually runs a command and commits\nthe result; CMD does not execute anything at build time, but specifies\nthe intended command for the image.",
|
|
"title": "CMD"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#expose",
|
|
"tags": "",
|
|
"text": "EXPOSE port [ port ...] The EXPOSE instruction informs Docker that the container will listen on the\nspecified network ports at runtime. Docker uses this information to interconnect\ncontainers using links (see the Docker User\nGuide ) and to determine which ports to expose to the\nhost when using the -P flag . Note : EXPOSE doesn't define which ports can be exposed to the host or make ports\naccessible from the host by default. To expose ports to the host, at runtime, use the -p flag or the -P flag .",
|
|
"title": "EXPOSE"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#env",
|
|
"tags": "",
|
|
"text": "ENV key value \nENV key = value ... The ENV instruction sets the environment variable key to the value value . This value will be in the environment of all \"descendent\" Dockerfile \ncommands and can be replaced inline in many as well. The ENV instruction has two forms. The first form, ENV key value ,\nwill set a single variable to a value. The entire string after the first\nspace will be treated as the value - including characters such as \nspaces and quotes. The second form, ENV key = value ... , allows for multiple variables to \nbe set at one time. Notice that the second form uses the equals sign (=) \nin the syntax, while the first form does not. Like command line parsing, \nquotes and backslashes can be used to include spaces within values. For example: ENV myName=\"John Doe\" myDog=Rex\\ The\\ Dog \\\n myCat=fluffy and ENV myName John Doe\nENV myDog Rex The Dog\nENV myCat fluffy will yield the same net results in the final container, but the first form \ndoes it all in one layer. The environment variables set using ENV will persist when a container is run\nfrom the resulting image. You can view the values using docker inspect , and\nchange them using docker run --env key = value . Note :\nEnvironment persistence can cause unexpected effects. For example,\nsetting ENV DEBIAN_FRONTEND noninteractive may confuse apt-get\nusers on a Debian-based image. To set a value for a single command, use RUN key = value command .",
|
|
"title": "ENV"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#add",
|
|
"tags": "",
|
|
"text": "ADD has two forms: ADD src ... dest ADD [\" src \"... \" dest \"] (this form is required for paths containing\nwhitespace) The ADD instruction copies new files, directories or remote file URLs from src \nand adds them to the filesystem of the container at the path dest . Multiple src resources may be specified, but if they are files or \ndirectories then they must be relative to the source directory that is \nbeing built (the context of the build). Each src may contain wildcards and matching will be done using Go's filepath.Match rules.\nFor most command line uses this should act as expected, for example: ADD hom* /mydir/ # adds all files starting with \"hom\"\nADD hom?.txt /mydir/ # ? is replaced with any single character The dest is an absolute path, or a path relative to WORKDIR , into which\nthe source will be copied inside the destination container. ADD test aDir/ # adds \"test\" to `WORKDIR`/aDir/ All new files and directories are created with a UID and GID of 0. In the case where src is a remote file URL, the destination will\nhave permissions of 600. If the remote file being retrieved has an HTTP Last-Modified header, the timestamp from that header will be used\nto set the mtime on the destination file. Then, like any other file\nprocessed during an ADD , mtime will be included in the determination\nof whether or not the file has changed and the cache should be updated. Note :\nIf you build by passing a Dockerfile through STDIN ( docker\nbuild - < somefile ), there is no build context, so the Dockerfile \ncan only contain a URL based ADD instruction. You can also pass a\ncompressed archive through STDIN: ( docker build - < archive.tar.gz ),\nthe Dockerfile at the root of the archive and the rest of the\narchive will get used as the context of the build. 
Note :\nIf your URL files are protected using authentication, you\nwill need to use RUN wget , RUN curl or use another tool from\nwithin the container as the ADD instruction does not support\nauthentication. Note :\nThe first encountered ADD instruction will invalidate the cache for all\nfollowing instructions from the Dockerfile if the contents of src have\nchanged. This includes invalidating the cache for RUN instructions.\nSee the Dockerfile Best Practices\nguide for more information. The copy obeys the following rules: The src path must be inside the context of the build;\n you cannot ADD ../something /something , because the first step of a\n docker build is to send the context directory (and subdirectories) to the\n docker daemon. If src is a URL and dest does not end with a trailing slash, then a\n file is downloaded from the URL and copied to dest . If src is a URL and dest does end with a trailing slash, then the\n filename is inferred from the URL and the file is downloaded to\n dest / filename . For instance, ADD http://example.com/foobar / would\n create the file /foobar . The URL must have a nontrivial path so that an\n appropriate filename can be discovered in this case ( http://example.com \n will not work). If src is a directory, the entire contents of the directory are copied, \n including filesystem metadata. Note :\nThe directory itself is not copied, just its contents. If src is a local tar archive in a recognized compression format\n (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources\n from remote URLs are not decompressed. When a directory is copied or\n unpacked, it has the same behavior as tar -x : the result is the union of: Whatever existed at the destination path and The contents of the source tree, with conflicts resolved in favor\n of \"2.\" on a file-by-file basis. If src is any other kind of file, it is copied individually along with\n its metadata. 
In this case, if dest ends with a trailing slash / , it\n will be considered a directory and the contents of src will be written\n at dest /base( src ) . If multiple src resources are specified, either directly or due to the\n use of a wildcard, then dest must be a directory, and it must end with \n a slash / . If dest does not end with a trailing slash, it will be considered a\n regular file and the contents of src will be written at dest . If dest doesn't exist, it is created along with all missing directories\n in its path.",
|
|
"title": "ADD"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#copy",
|
|
"tags": "",
|
|
"text": "COPY has two forms: COPY src ... dest COPY [\" src \"... \" dest \"] (this form is required for paths containing\nwhitespace) The COPY instruction copies new files or directories from src \nand adds them to the filesystem of the container at the path dest . Multiple src resources may be specified, but they must be relative\nto the source directory that is being built (the context of the build). Each src may contain wildcards and matching will be done using Go's filepath.Match rules.\nFor most command line uses this should act as expected, for example: COPY hom* /mydir/ # adds all files starting with \"hom\"\nCOPY hom?.txt /mydir/ # ? is replaced with any single character The dest is an absolute path, or a path relative to WORKDIR , into which\nthe source will be copied inside the destination container. COPY test aDir/ # adds \"test\" to `WORKDIR`/aDir/ All new files and directories are created with a UID and GID of 0. Note :\nIf you build using STDIN ( docker build - < somefile ), there is no\nbuild context, so COPY can't be used. The copy obeys the following rules: The src path must be inside the context of the build;\n you cannot COPY ../something /something , because the first step of a\n docker build is to send the context directory (and subdirectories) to the\n docker daemon. If src is a directory, the entire contents of the directory are copied, \n including filesystem metadata. Note :\nThe directory itself is not copied, just its contents. If src is any other kind of file, it is copied individually along with\n its metadata. In this case, if dest ends with a trailing slash / , it\n will be considered a directory and the contents of src will be written\n at dest /base( src ) . If multiple src resources are specified, either directly or due to the\n use of a wildcard, then dest must be a directory, and it must end with \n a slash / . 
If dest does not end with a trailing slash, it will be considered a\n regular file and the contents of src will be written at dest . If dest doesn't exist, it is created along with all missing directories\n in its path.",
|
|
"title": "COPY"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#entrypoint",
|
|
"tags": "",
|
|
"text": "ENTRYPOINT has two forms: ENTRYPOINT [\"executable\", \"param1\", \"param2\"] \n (the preferred exec form) ENTRYPOINT command param1 param2 \n ( shell form) An ENTRYPOINT allows you to configure a container that will run as an executable. For example, the following will start nginx with its default content, listening\non port 80: docker run -i -t --rm -p 80:80 nginx Command line arguments to docker run image will be appended after all\nelements in an exec form ENTRYPOINT , and will override all elements specified\nusing CMD .\nThis allows arguments to be passed to the entry point, i.e., docker run image -d \nwill pass the -d argument to the entry point. \nYou can override the ENTRYPOINT instruction using the docker run --entrypoint \nflag. The shell form prevents any CMD or run command line arguments from being\nused, but has the disadvantage that your ENTRYPOINT will be started as a\nsubcommand of /bin/sh -c , which does not pass signals.\nThis means that the executable will not be the container's PID 1 - and\nwill not receive Unix signals - so your executable will not receive a SIGTERM from docker stop container . Only the last ENTRYPOINT instruction in the Dockerfile will have an effect. Exec form ENTRYPOINT example You can use the exec form of ENTRYPOINT to set fairly stable default commands\nand arguments and then use either form of CMD to set additional defaults that\nare more likely to be changed. FROM ubuntu\nENTRYPOINT [\"top\", \"-b\"]\nCMD [\"-c\"] When you run the container, you can see that top is the only process: $ docker run -it --rm --name test top -H\ntop - 08:25:00 up 7:27, 0 users, load average: 0.00, 0.01, 0.05\nThreads: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem: 2056668 total, 1616832 used, 439836 free, 99352 buffers\nKiB Swap: 1441840 total, 0 used, 1441840 free. 
1324440 cached Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1 root 20 0 19744 2336 2080 R 0.0 0.1 0:00.04 top To examine the result further, you can use docker exec : $ docker exec -it test ps aux\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 2.6 0.1 19752 2352 ? Ss+ 08:24 0:00 top -b -H\nroot 7 0.0 0.1 15572 2164 ? R+ 08:25 0:00 ps aux And you can gracefully request top to shut down using docker stop test . The following Dockerfile shows using the ENTRYPOINT to run Apache in the\nforeground (i.e., as PID 1 ): FROM debian:stable\nRUN apt-get update && apt-get install -y --force-yes apache2\nEXPOSE 80 443\nVOLUME [\"/var/www\", \"/var/log/apache2\", \"/etc/apache2\"]\nENTRYPOINT [\"/usr/sbin/apache2ctl\", \"-D\", \"FOREGROUND\"] If you need to write a starter script for a single executable, you can ensure that\nthe final executable receives the Unix signals by using exec and gosu \ncommands: #!/bin/bash\nset -e\n\nif [ \"$1\" = 'postgres' ]; then\n chown -R postgres \"$PGDATA\"\n\n if [ -z \"$(ls -A \"$PGDATA\")\" ]; then\n gosu postgres initdb\n fi\n\n exec gosu postgres \"$@\"\nfi\n\nexec \"$@\" Lastly, if you need to do some extra cleanup (or communicate with other containers)\non shutdown, or are co-ordinating more than one executable, you may need to ensure\nthat the ENTRYPOINT script receives the Unix signals, passes them on, and then\ndoes some more work: #!/bin/sh\n# Note: I've written this using sh so it works in the busybox container too\n\n# USE the trap if you need to also do manual cleanup after the service is stopped,\n# or need to start multiple services in the one container\ntrap \"echo TRAPed signal\" HUP INT QUIT KILL TERM\n\n# start service in background here\n/usr/sbin/apachectl start\n\necho \"[hit enter key to exit] or run 'docker stop container'\"\nread\n\n# stop service and clean up here\necho \"stopping apache\"\n/usr/sbin/apachectl stop\n\necho \"exited $0\" If you run this image with docker run -it --rm -p 80:80 --name test apache ,\nyou can then examine 
the container's processes with docker exec , or docker top ,\nand then ask the script to stop Apache: $ docker exec -it test ps aux\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.1 0.0 4448 692 ? Ss+ 00:42 0:00 /bin/sh /run.sh 123 cmd cmd2\nroot 19 0.0 0.2 71304 4440 ? Ss 00:42 0:00 /usr/sbin/apache2 -k start\nwww-data 20 0.2 0.2 360468 6004 ? Sl 00:42 0:00 /usr/sbin/apache2 -k start\nwww-data 21 0.2 0.2 360468 6000 ? Sl 00:42 0:00 /usr/sbin/apache2 -k start\nroot 81 0.0 0.1 15572 2140 ? R+ 00:44 0:00 ps aux\n$ docker top test\nPID USER COMMAND\n10035 root {run.sh} /bin/sh /run.sh 123 cmd cmd2\n10054 root /usr/sbin/apache2 -k start\n10055 33 /usr/sbin/apache2 -k start\n10056 33 /usr/sbin/apache2 -k start\n$ /usr/bin/time docker stop test\ntest\nreal 0m 0.27s\nuser 0m 0.03s\nsys 0m 0.03s Note: you can override the ENTRYPOINT setting using --entrypoint ,\nbut this can only set the binary to exec (no sh -c will be used). Note :\nThe exec form is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes ('). Note :\nUnlike the shell form, the exec form does not invoke a command shell.\nThis means that normal shell processing does not happen. For example, ENTRYPOINT [ \"echo\", \"$HOME\" ] will not do variable substitution on $HOME .\nIf you want shell processing then either use the shell form or execute \na shell directly, for example: ENTRYPOINT [ \"sh\", \"-c\", \"echo\", \"$HOME\" ] .\nVariables that are defined in the Dockerfile using ENV will be substituted by\nthe Dockerfile parser. 
Shell form ENTRYPOINT example You can specify a plain string for the ENTRYPOINT and it will execute in /bin/sh -c .\nThis form will use shell processing to substitute shell environment variables,\nand will ignore any CMD or docker run command line arguments.\nTo ensure that docker stop will signal any long running ENTRYPOINT executable\ncorrectly, you need to remember to start it with exec : FROM ubuntu\nENTRYPOINT exec top -b When you run this image, you'll see the single PID 1 process: $ docker run -it --rm --name test top\nMem: 1704520K used, 352148K free, 0K shrd, 0K buff, 140368121167873K cached\nCPU: 5% usr 0% sys 0% nic 94% idle 0% io 0% irq 0% sirq\nLoad average: 0.08 0.03 0.05 2/98 6\n PID PPID USER STAT VSZ %VSZ %CPU COMMAND\n 1 0 root R 3164 0% 0% top -b Which will exit cleanly on docker stop : $ /usr/bin/time docker stop test\ntest\nreal 0m 0.20s\nuser 0m 0.02s\nsys 0m 0.04s If you forget to add exec to the beginning of your ENTRYPOINT : FROM ubuntu\nENTRYPOINT top -b\nCMD --ignored-param1 You can then run it (giving it a name for the next step): $ docker run -it --name test top --ignored-param2\nMem: 1704184K used, 352484K free, 0K shrd, 0K buff, 140621524238337K cached\nCPU: 9% usr 2% sys 0% nic 88% idle 0% io 0% irq 0% sirq\nLoad average: 0.01 0.02 0.05 2/101 7\n PID PPID USER STAT VSZ %VSZ %CPU COMMAND\n 1 0 root S 3168 0% 0% /bin/sh -c top -b cmd cmd2\n 7 1 root R 3164 0% 0% top -b You can see from the output of top that the specified ENTRYPOINT is not PID 1 . If you then run docker stop test , the container will not exit cleanly - the stop command will be forced to send a SIGKILL after the timeout: $ docker exec -it test ps aux\nPID USER COMMAND\n 1 root /bin/sh -c top -b cmd cmd2\n 7 root top -b\n 8 root ps aux\n$ /usr/bin/time docker stop test\ntest\nreal 0m 10.19s\nuser 0m 0.04s\nsys 0m 0.03s",
|
|
"title": "ENTRYPOINT"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#volume",
|
|
"tags": "",
|
|
"text": "VOLUME [\"/data\"] The VOLUME instruction creates a mount point with the specified name\nand marks it as holding externally mounted volumes from native host or other\ncontainers. The value can be a JSON array, VOLUME [\"/var/log/\"] , or a plain\nstring with multiple arguments, such as VOLUME /var/log or VOLUME /var/log\n/var/db . For more information/examples and mounting instructions via the\nDocker client, refer to Share Directories via Volumes \ndocumentation. The docker run command initializes the newly created volume with any data \nthat exists at the specified location within the base image. For example, \nconsider the following Dockerfile snippet: FROM ubuntu\nRUN mkdir /myvol\nRUN echo \"hello world\" > /myvol/greeting\nVOLUME /myvol This Dockerfile results in an image that causes docker run to\ncreate a new mount point at /myvol and copy the greeting file \ninto the newly created volume. Note :\nThe list is parsed as a JSON array, which means that\nyou must use double-quotes (\") around words not single-quotes (').",
|
|
"title": "VOLUME"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#user",
|
|
"tags": "",
|
|
"text": "USER daemon The USER instruction sets the user name or UID to use when running the image\nand for any RUN , CMD and ENTRYPOINT instructions that follow it in the Dockerfile .",
|
|
"title": "USER"
|
|
},
|
|
{
|
|
"loc": "/reference/builder#workdir",
|
|
"tags": "",
|
|
"text": "WORKDIR /path/to/workdir The WORKDIR instruction sets the working directory for any RUN , CMD , ENTRYPOINT , COPY and ADD instructions that follow it in the Dockerfile . It can be used multiple times in the one Dockerfile . If a relative path\nis provided, it will be relative to the path of the previous WORKDIR \ninstruction. For example: WORKDIR /a\nWORKDIR b\nWORKDIR c\nRUN pwd The output of the final pwd command in this Dockerfile would be /a/b/c . The WORKDIR instruction can resolve environment variables previously set using ENV . You can only use environment variables explicitly set in the Dockerfile .\nFor example: ENV DIRPATH /path\nWORKDIR $DIRPATH/$DIRNAME The output of the final pwd command in this Dockerfile would be /path/$DIRNAME",
"title": "WORKDIR"
},
{
"loc": "/reference/builder#onbuild",
"tags": "",
"text": "ONBUILD [INSTRUCTION] The ONBUILD instruction adds to the image a trigger instruction to\nbe executed at a later time, when the image is used as the base for\nanother build. The trigger will be executed in the context of the\ndownstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile . Any build instruction can be registered as a trigger. This is useful if you are building an image which will be used as a base\nto build other images, for example an application build environment or a\ndaemon which may be customized with user-specific configuration. For example, if your image is a reusable Python application builder, it\nwill require application source code to be added in a particular\ndirectory, and it might require a build script to be called after \nthat. You can't just call ADD and RUN now, because you don't yet\nhave access to the application source code, and it will be different for\neach application build. You could simply provide application developers\nwith a boilerplate Dockerfile to copy-paste into their application, but\nthat is inefficient, error-prone and difficult to update because it\nmixes with application-specific code. The solution is to use ONBUILD to register advance instructions to\nrun later, during the next build stage. Here's how it works: When it encounters an ONBUILD instruction, the builder adds a\n trigger to the metadata of the image being built. The instruction\n does not otherwise affect the current build. At the end of the build, a list of all triggers is stored in the\n image manifest, under the key OnBuild . They can be inspected with\n the docker inspect command. Later the image may be used as a base for a new build, using the\n FROM instruction. As part of processing the FROM instruction,\n the downstream builder looks for ONBUILD triggers, and executes\n them in the same order they were registered. 
If any of the triggers\n fail, the FROM instruction is aborted which in turn causes the\n build to fail. If all triggers succeed, the FROM instruction\n completes and the build continues as usual. Triggers are cleared from the final image after being executed. In\n other words they are not inherited by \"grand-children\" builds. For example you might add something like this: [...]\nONBUILD ADD . /app/src\nONBUILD RUN /usr/local/bin/python-build --dir /app/src\n[...] Warning : Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed. Warning : The ONBUILD instruction may not trigger FROM or MAINTAINER instructions.",
"title": "ONBUILD"
},
{
"loc": "/reference/builder#dockerfile-examples",
"tags": "",
"text": "# Nginx\n#\n# VERSION 0.0.1\n\nFROM ubuntu\nMAINTAINER Victor Vieux victor@docker.com \n\nRUN apt-get update apt-get install -y inotify-tools nginx apache2 openssh-server\n\n# Firefox over VNC\n#\n# VERSION 0.3\n\nFROM ubuntu\n\n# Install vnc, xvfb in order to create a 'fake' display and firefox\nRUN apt-get update apt-get install -y x11vnc xvfb firefox\nRUN mkdir ~/.vnc\n# Setup a password\nRUN x11vnc -storepasswd 1234 ~/.vnc/passwd\n# Autostart firefox (might not be the best way, but it does the trick)\nRUN bash -c 'echo \"firefox\" /.bashrc'\n\nEXPOSE 5900\nCMD [\"x11vnc\", \"-forever\", \"-usepw\", \"-create\"]\n\n# Multiple images example\n#\n# VERSION 0.1\n\nFROM ubuntu\nRUN echo foo bar\n# Will output something like === 907ad6c2736f\n\nFROM ubuntu\nRUN echo moo oink\n# Will output something like === 695d7793cbe4\n\n# You\u1fbfll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with\n# /oink.",
"title": "Dockerfile Examples"
},
{
"loc": "/faq/",
"tags": "",
"text": "Frequently Asked Questions (FAQ)\nIf you don't see your question here, feel free to submit new ones to\n. Or, you can fork the\nrepo and contribute them yourself by editing\nthe documentation sources.\nHow much does Docker cost?\nDocker is 100% free. It is open source, so you can use it without paying.\nWhat open source license are you using?\nWe are using the Apache License Version 2.0, see it here:\nhttps://github.com/docker/docker/blob/master/LICENSE\nDoes Docker run on Mac OS X or Windows?\nDocker currently runs only on Linux, but you can use VirtualBox to run Docker in\na virtual machine on your box, and get the best of both worlds. Check out the\nMac OS X and Microsoft\nWindows installation guides. The small Linux\ndistribution boot2docker can be run inside virtual machines on these two\noperating systems.\n\nNote: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples.\n\nHow do containers compare to virtual machines?\nThey are complementary. VMs are best used to allocate chunks of hardware\nresources. Containers operate at the process level, which makes them very\nlightweight and perfect as a unit of software delivery.\nWhat does Docker add to just plain LXC?\nDocker is not a replacement for LXC. \"LXC\" refers to capabilities of the Linux\nkernel (specifically namespaces and control groups) which allow sandboxing\nprocesses from one another, and controlling their resource allocations. On top\nof this low-level foundation of kernel features, Docker offers a high-level tool\nwith several powerful functionalities:\n\n\nPortable deployment across machines. Docker defines a format for bundling\n an application and all its dependencies into a single object which can be\n transferred to any Docker-enabled machine, and executed there with the\n guarantee that the execution environment exposed to the application will be the\n same. 
LXC implements process sandboxing, which is an important pre-requisite\n for portable deployment, but that alone is not enough for portable deployment.\n If you sent me a copy of your application installed in a custom LXC\n configuration, it would almost certainly not run on my machine the way it does\n on yours, because it is tied to your machine's specific configuration:\n networking, storage, logging, distro, etc. Docker defines an abstraction for\n these machine-specific settings, so that the exact same Docker container can\n run - unchanged - on many different machines, with many different\n configurations.\n\n\nApplication-centric. Docker is optimized for the deployment of\n applications, as opposed to machines. This is reflected in its API, user\n interface, design philosophy and documentation. By contrast, the lxc helper\n scripts focus on containers as lightweight machines - basically servers that\n boot faster and need less RAM. We think there's more to containers than just\n that.\n\n\nAutomatic build. Docker includes a tool for developers to automatically\n assemble a container from their source\n code, with full control over application\n dependencies, build tools, packaging etc. They are free to use make, maven,\n chef, puppet, salt, Debian packages, RPMs, source tarballs, or any\n combination of the above, regardless of the configuration of the machines.\n\n\nVersioning. Docker includes git-like capabilities for tracking successive\n versions of a container, inspecting the diff between versions, committing new\n versions, rolling back etc. The history also includes how a container was\n assembled and by whom, so you get full traceability from the production server\n all the way back to the upstream developer. Docker also implements incremental\n uploads and downloads, similar to git pull, so new versions of a container\n can be transferred by only sending diffs.\n\n\nComponent re-use. 
Any container can be used as a \"base image\" to create more specialized components. This can\n be done manually or as part of an automated build. For example you can prepare\n the ideal Python environment, and use it as a base for 10 different\n applications. Your ideal Postgresql setup can be re-used for all your future\n projects. And so on.\n\n\nSharing. Docker has access to a public registry\n where thousands of people have uploaded useful containers: anything from Redis,\n CouchDB, Postgres to IRC bouncers to Rails app servers to Hadoop to base images\n for various Linux distros. The\n registry also\n includes an official \"standard library\" of useful containers maintained by the\n Docker team. The registry itself is open-source, so anyone can deploy their own\n registry to store and transfer private containers, for internal server\n deployments for example.\n\n\nTool ecosystem. Docker defines an API for automating and customizing the\n creation and deployment of containers. There are a huge number of tools\n integrating with Docker to extend its capabilities. PaaS-like deployment\n (Dokku, Deis, Flynn), multi-node orchestration (Maestro, Salt, Mesos, Openstack\n Nova), management dashboards (docker-ui, Openstack Horizon, Shipyard),\n configuration management (Chef, Puppet), continuous integration (Jenkins,\n Strider, Travis), etc. Docker is rapidly establishing itself as the standard\n for container-based tooling.\n\n\nWhat is different between a Docker container and a VM?\nThere's a great StackOverflow answer showing the differences.\nDo I lose my data when the container exits?\nNot at all! Any data that your application writes to disk gets preserved in its\ncontainer until you explicitly delete the container. 
The file system for the\ncontainer persists even after the container halts.\nHow far do Docker containers scale?\nSome of the largest server farms in the world today are based on containers.\nLarge web deployments like Google and Twitter, and platform providers such as\nHeroku and dotCloud all run on container technology, at a scale of hundreds of\nthousands or even millions of containers running in parallel.\nHow do I connect Docker containers?\nCurrently the recommended way to link containers is via the link primitive. You\ncan see details of how to work with links here.\nAlso useful for more flexible service portability is the Ambassador linking\npattern.\nHow do I run more than one process in a Docker container?\nAny capable process supervisor such as http://supervisord.org/, runit, s6, or daemontools can do the trick. Docker\nwill start up the process management daemon which will then fork to run\nadditional processes. As long as the process manager daemon continues to run,\nthe container will continue to as well. You can see a more substantial example\nthat uses supervisord here.\nWhat platforms does Docker run on?\nLinux:\n\nUbuntu 12.04, 13.04 et al \nFedora 19/20+ \nRHEL 6.5+ \nCentOS 6+ \nGentoo \nArchLinux \nopenSUSE 12.3+ \nCRUX 3.0+\n\nCloud:\n\nAmazon EC2 \nGoogle Compute Engine \nRackspace\n\nHow do I report a security issue with Docker?\nYou can learn about the project's security policy\nhere and report security issues to this\nmailbox.\nWhy do I need to sign my commits to Docker with the DCO?\nPlease read our blog post on the introduction of the DCO.\nWhen building an image, should I prefer system libraries or bundled ones?\nThis is a summary of a discussion on the docker-dev mailing list.\nVirtually all programs depend on third-party libraries. 
Most frequently, they\nwill use dynamic linking and some kind of package dependency, so that when\nmultiple programs need the same library, it is installed only once.\nSome programs, however, will bundle their third-party libraries, because they\nrely on very specific versions of those libraries. For instance, Node.js bundles\nOpenSSL; MongoDB bundles V8 and Boost (among others).\nWhen creating a Docker image, is it better to use the bundled libraries, or\nshould you build those programs so that they use the default system libraries\ninstead?\nThe key point about system libraries is not about saving disk or memory space.\nIt is about security. All major distributions handle security seriously, by\nhaving dedicated security teams, following up closely with published\nvulnerabilities, and disclosing advisories themselves. (Look at the Debian\nSecurity Information for an example of those\nprocedures.) Upstream developers, however, do not always implement similar\npractices.\nBefore setting up a Docker image to compile a program from source, if you want\nto use bundled libraries, you should check if the upstream authors provide a\nconvenient way to announce security vulnerabilities, and if they update their\nbundled libraries in a timely manner. 
If they don't, you are exposing yourself\n(and the users of your image) to security vulnerabilities.\nLikewise, before using packages built by others, you should check if the\nchannels providing those packages implement similar security best practices.\nDownloading and installing an \"all-in-one\" .deb or .rpm sounds great at first,\nexcept if you have no way to figure out that it contains a copy of the OpenSSL\nlibrary vulnerable to the Heartbleed bug.\nWhy is DEBIAN_FRONTEND=noninteractive discouraged in Dockerfiles?\nWhen building Docker images on Debian and Ubuntu you may have seen errors like:\nunable to initialize frontend: Dialog\n\nThese errors don't stop the image from being built but inform you that the\ninstallation process tried to open a dialog box, but was unable to. Generally,\nthese errors are safe to ignore.\nSome people circumvent these errors by changing the DEBIAN_FRONTEND\nenvironment variable inside the Dockerfile using:\nENV DEBIAN_FRONTEND=noninteractive\n\nThis prevents the installer from opening dialog boxes during installation which\nstops the errors.\nWhile this may sound like a good idea, it may have side effects. The\nDEBIAN_FRONTEND environment variable will be inherited by all images and\ncontainers built from your image, effectively changing their behavior. People\nusing those images will run into problems when installing software\ninteractively, because installers will not show any dialog boxes.\nBecause of this, and because setting DEBIAN_FRONTEND to noninteractive is\nmainly a 'cosmetic' change, we discourage changing it.\nIf you really need to change its setting, make sure to change it back to its\ndefault value\nafterwards.\nWhy do I get Connection reset by peer when making a request to a service running in a container?\nTypically, this message is returned if the service is already bound to your\nlocalhost. As a result, requests coming to the container from outside are\ndropped. 
To correct this problem, change the service's configuration on your\nlocalhost so that the service accepts requests from all IPs. If you aren't sure\nhow to do this, check the documentation for your OS.\nWhere can I find more answers?\nYou can find more answers on:\n\nDocker user mailing list \nDocker developer mailing list \nIRC, docker on freenode \nGitHub \nAsk questions on Stackoverflow \nJoin the conversation on Twitter\n\nLooking for something else to read? Check out the User Guide.",
"title": "FAQ"
},
{
"loc": "/faq#frequently-asked-questions-faq",
"tags": "",
"text": "If you don't see your question here, feel free to submit new ones to . Or, you can fork the\nrepo and contribute them yourself by editing\nthe documentation sources. How much does Docker cost? Docker is 100% free. It is open source, so you can use it without paying. What open source license are you using? We are using the Apache License Version 2.0, see it here: https://github.com/docker/docker/blob/master/LICENSE Does Docker run on Mac OS X or Windows? Docker currently runs only on Linux, but you can use VirtualBox to run Docker in\na virtual machine on your box, and get the best of both worlds. Check out the Mac OS X and Microsoft\nWindows installation guides. The small Linux\ndistribution boot2docker can be run inside virtual machines on these two\noperating systems. Note: if you are using a remote Docker daemon, such as Boot2Docker, \nthen do not type the sudo before the docker commands shown in the\ndocumentation's examples. How do containers compare to virtual machines? They are complementary. VMs are best used to allocate chunks of hardware\nresources. Containers operate at the process level, which makes them very\nlightweight and perfect as a unit of software delivery. What does Docker add to just plain LXC? Docker is not a replacement for LXC. \"LXC\" refers to capabilities of the Linux\nkernel (specifically namespaces and control groups) which allow sandboxing\nprocesses from one another, and controlling their resource allocations. On top\nof this low-level foundation of kernel features, Docker offers a high-level tool\nwith several powerful functionalities: Portable deployment across machines. Docker defines a format for bundling\n an application and all its dependencies into a single object which can be\n transferred to any Docker-enabled machine, and executed there with the\n guarantee that the execution environment exposed to the application will be the\n same. 
LXC implements process sandboxing, which is an important pre-requisite\n for portable deployment, but that alone is not enough for portable deployment.\n If you sent me a copy of your application installed in a custom LXC\n configuration, it would almost certainly not run on my machine the way it does\n on yours, because it is tied to your machine's specific configuration:\n networking, storage, logging, distro, etc. Docker defines an abstraction for\n these machine-specific settings, so that the exact same Docker container can\n run - unchanged - on many different machines, with many different\n configurations. Application-centric. Docker is optimized for the deployment of\n applications, as opposed to machines. This is reflected in its API, user\n interface, design philosophy and documentation. By contrast, the lxc helper\n scripts focus on containers as lightweight machines - basically servers that\n boot faster and need less RAM. We think there's more to containers than just\n that. Automatic build. Docker includes a tool for developers to automatically\n assemble a container from their source\n code , with full control over application\n dependencies, build tools, packaging etc. They are free to use make , maven ,\n chef , puppet , salt, Debian packages, RPMs, source tarballs, or any\n combination of the above, regardless of the configuration of the machines. Versioning. Docker includes git-like capabilities for tracking successive\n versions of a container, inspecting the diff between versions, committing new\n versions, rolling back etc. The history also includes how a container was\n assembled and by whom, so you get full traceability from the production server\n all the way back to the upstream developer. Docker also implements incremental\n uploads and downloads, similar to git pull , so new versions of a container\n can be transferred by only sending diffs. Component re-use. 
Any container can be used as a \"base image\" to create more specialized components. This can\n be done manually or as part of an automated build. For example you can prepare\n the ideal Python environment, and use it as a base for 10 different\n applications. Your ideal Postgresql setup can be re-used for all your future\n projects. And so on. Sharing. Docker has access to a public registry \n where thousands of people have uploaded useful containers: anything from Redis,\n CouchDB, Postgres to IRC bouncers to Rails app servers to Hadoop to base images\n for various Linux distros. The\n registry also\n includes an official \"standard library\" of useful containers maintained by the\n Docker team. The registry itself is open-source, so anyone can deploy their own\n registry to store and transfer private containers, for internal server\n deployments for example. Tool ecosystem. Docker defines an API for automating and customizing the\n creation and deployment of containers. There are a huge number of tools\n integrating with Docker to extend its capabilities. PaaS-like deployment\n (Dokku, Deis, Flynn), multi-node orchestration (Maestro, Salt, Mesos, Openstack\n Nova), management dashboards (docker-ui, Openstack Horizon, Shipyard),\n configuration management (Chef, Puppet), continuous integration (Jenkins,\n Strider, Travis), etc. Docker is rapidly establishing itself as the standard\n for container-based tooling. What is different between a Docker container and a VM? There's a great StackOverflow answer showing the differences . Do I lose my data when the container exits? Not at all! Any data that your application writes to disk gets preserved in its\ncontainer until you explicitly delete the container. The file system for the\ncontainer persists even after the container halts. How far do Docker containers scale? 
Some of the largest server farms in the world today are based on containers.\nLarge web deployments like Google and Twitter, and platform providers such as\nHeroku and dotCloud all run on container technology, at a scale of hundreds of\nthousands or even millions of containers running in parallel. How do I connect Docker containers? Currently the recommended way to link containers is via the link primitive. You\ncan see details of how to work with links here . Also useful for more flexible service portability is the Ambassador linking\npattern . How do I run more than one process in a Docker container? Any capable process supervisor such as http://supervisord.org/ , runit, s6, or daemontools can do the trick. Docker\nwill start up the process management daemon which will then fork to run\nadditional processes. As long as the process manager daemon continues to run,\nthe container will continue to as well. You can see a more substantial example that uses supervisord here . What platforms does Docker run on? Linux: Ubuntu 12.04, 13.04 et al Fedora 19/20+ RHEL 6.5+ CentOS 6+ Gentoo ArchLinux openSUSE 12.3+ CRUX 3.0+ Cloud: Amazon EC2 Google Compute Engine Rackspace How do I report a security issue with Docker? You can learn about the project's security policy here and report security issues to this mailbox . Why do I need to sign my commits to Docker with the DCO? Please read our blog post on the introduction of the DCO. When building an image, should I prefer system libraries or bundled ones? This is a summary of a discussion on the docker-dev mailing list . Virtually all programs depend on third-party libraries. Most frequently, they\nwill use dynamic linking and some kind of package dependency, so that when\nmultiple programs need the same library, it is installed only once. Some programs, however, will bundle their third-party libraries, because they\nrely on very specific versions of those libraries. 
For instance, Node.js bundles\nOpenSSL; MongoDB bundles V8 and Boost (among others). When creating a Docker image, is it better to use the bundled libraries, or\nshould you build those programs so that they use the default system libraries\ninstead? The key point about system libraries is not about saving disk or memory space.\nIt is about security. All major distributions handle security seriously, by\nhaving dedicated security teams, following up closely with published\nvulnerabilities, and disclosing advisories themselves. (Look at the Debian\nSecurity Information for an example of those\nprocedures.) Upstream developers, however, do not always implement similar\npractices. Before setting up a Docker image to compile a program from source, if you want\nto use bundled libraries, you should check if the upstream authors provide a\nconvenient way to announce security vulnerabilities, and if they update their\nbundled libraries in a timely manner. If they don't, you are exposing yourself\n(and the users of your image) to security vulnerabilities. Likewise, before using packages built by others, you should check if the\nchannels providing those packages implement similar security best practices.\nDownloading and installing an \"all-in-one\" .deb or .rpm sounds great at first,\nexcept if you have no way to figure out that it contains a copy of the OpenSSL\nlibrary vulnerable to the Heartbleed bug. Why is DEBIAN_FRONTEND=noninteractive discouraged in Dockerfiles? When building Docker images on Debian and Ubuntu you may have seen errors like: unable to initialize frontend: Dialog These errors don't stop the image from being built but inform you that the\ninstallation process tried to open a dialog box, but was unable to. Generally,\nthese errors are safe to ignore. 
Some people circumvent these errors by changing the DEBIAN_FRONTEND \nenvironment variable inside the Dockerfile using: ENV DEBIAN_FRONTEND=noninteractive This prevents the installer from opening dialog boxes during installation which\nstops the errors. While this may sound like a good idea, it may have side effects. The DEBIAN_FRONTEND environment variable will be inherited by all images and\ncontainers built from your image, effectively changing their behavior. People\nusing those images will run into problems when installing software\ninteractively, because installers will not show any dialog boxes. Because of this, and because setting DEBIAN_FRONTEND to noninteractive is\nmainly a 'cosmetic' change, we discourage changing it. If you really need to change its setting, make sure to change it back to its default value \nafterwards. Why do I get Connection reset by peer when making a request to a service running in a container? Typically, this message is returned if the service is already bound to your\nlocalhost. As a result, requests coming to the container from outside are\ndropped. To correct this problem, change the service's configuration on your\nlocalhost so that the service accepts requests from all IPs. If you aren't sure\nhow to do this, check the documentation for your OS. Where can I find more answers? You can find more answers on: Docker user mailing list Docker developer mailing list IRC, docker on freenode GitHub Ask questions on Stackoverflow Join the conversation on Twitter Looking for something else to read? Check out the User Guide .",
"title": "Frequently Asked Questions (FAQ)"
},
{
"loc": "/reference/run/",
"tags": "",
"text": "Docker run reference\nDocker runs processes in isolated containers. When an operator\nexecutes docker run, she starts a process with its own file system,\nits own networking, and its own isolated process tree. The\nImage which starts the process may define\ndefaults related to the binary to run, the networking to expose, and\nmore, but docker run gives final control to the operator who starts\nthe container from the image. That's the main reason\nrun has more options than any\nother docker command.\nGeneral form\nThe basic docker run command takes this form:\n$ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]\n\nTo learn how to interpret the types of [OPTIONS],\nsee Option types.\nThe list of [OPTIONS] breaks down into two groups:\n\nSettings exclusive to operators, including:\nDetached or Foreground running,\nContainer Identification,\nNetwork settings, and\nRuntime Constraints on CPU and Memory\nPrivileges and LXC Configuration\n\n\nSettings shared between operators and developers, where operators can\n override defaults developers set in images at build time.\n\nTogether, the docker run [OPTIONS] give the operator complete control over runtime\nbehavior, allowing them to override all defaults set by\nthe developer during docker build and nearly all the defaults set by\nthe Docker runtime itself.\nOperator exclusive options\nOnly the operator (the person executing docker run) can set the\nfollowing options.\n\nDetached vs Foreground\nDetached (-d)\nForeground\n\n\nContainer Identification\nName (--name)\nPID Equivalent\n\n\nIPC Settings\nNetwork Settings\nRestart Policies(--restart)\nClean Up (--rm)\nRuntime Constraints on CPU and Memory\nRuntime Privilege, Linux Capabilities, and LXC Configuration\n\nDetached vs foreground\nWhen starting a Docker container, you must first decide if you want to\nrun the container in the background in a \"detached\" mode or in the\ndefault foreground mode:\n-d=false: Detached mode: Run container in the background, 
print new container id\n\nDetached (-d)\nIn detached mode (-d=true or just -d), all I/O should be done\nthrough network connections or shared volumes because the container is\nno longer listening to the command line where you executed docker run.\nYou can reattach to a detached container with docker\nattach. If you choose to run a\ncontainer in the detached mode, then you cannot use the --rm option.\nForeground\nIn foreground mode (the default when -d is not specified), docker\nrun can start the process in the container and attach the console to\nthe process's standard input, output, and standard error. It can even\npretend to be a TTY (this is what most command line executables expect)\nand pass along signals. All of that is configurable:\n-a=[] : Attach to `STDIN`, `STDOUT` and/or `STDERR`\n-t=false : Allocate a pseudo-tty\n--sig-proxy=true: Proxify all received signal to the process (non-TTY mode only)\n-i=false : Keep STDIN open even if not attached\n\nIf you do not specify -a then Docker will attach all standard\nstreams. You can\nspecify to which of the three standard streams (STDIN, STDOUT,\nSTDERR) you'd like to connect instead, as in:\n$ sudo docker run -a stdin -a stdout -i -t ubuntu /bin/bash\n\nFor interactive processes (like a shell), you must use -i -t together in\norder to allocate a tty for the container process. Specifying -t is however\nforbidden when the client standard output is redirected or piped, such as in:\necho test | docker run -i busybox cat.\nContainer identification\nName (--name)\nThe operator can identify a container in three ways:\n\nUUID long identifier\n (\"f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778\")\nUUID short identifier (\"f78375b1c487\")\nName (\"evil_ptolemy\")\n\nThe UUID identifiers come from the Docker daemon, and if you do not\nassign a name to the container with --name then the daemon will\ngenerate a random string name too. 
The name can become a handy way to\nadd meaning to a container since you can use this name when defining\nlinks (or any\nother place you need to identify a container). This works for both\nbackground and foreground Docker containers.\nPID equivalent\nFinally, to help with automation, you can have Docker write the\ncontainer ID out to a file of your choosing. This is similar to how some\nprograms might write out their process ID to a file (you've seen them as\nPID files):\n--cidfile=\"\": Write the container ID to the file\n\nImage[:tag]\nWhile not strictly a means of identifying a container, you can specify a version of an\nimage you'd like to run the container with by adding image[:tag] to the command. For\nexample, docker run ubuntu:14.04.\nPID Settings\n--pid=\"\" : Set the PID (Process) Namespace mode for the container,\n 'host': use the host's PID namespace inside the container\n\nBy default, all containers have the PID namespace enabled.\nPID namespace provides separation of processes. The PID Namespace removes the\nview of the system processes, and allows process ids to be reused including\npid 1.\nIn certain cases you want your container to share the host's process namespace,\nbasically allowing processes within the container to see all of the processes\non the system. For example, you could build a container with debugging tools\nlike strace or gdb, but want to use these tools when debugging processes\nwithin the container.\n$ sudo docker run --pid=host rhel7 strace -p 1234\n\nThis command would allow you to use strace inside the container on pid 1234 on\nthe host.\nIPC Settings\n--ipc=\"\" : Set the IPC mode for the container,\n 'container:name|id': reuses another container's IPC namespace\n 'host': use the host's IPC namespace inside the container\n\nBy default, all containers have the IPC namespace enabled.\nIPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. 
\nShared memory segments are used to accelerate inter-process communication at\nmemory speed, rather than through pipes or through the network stack. Shared\nmemory is commonly used by databases and custom-built (typically C/OpenMPI, \nC++/using boost libraries) high performance applications for scientific\ncomputing and financial services industries. If these types of applications\nare broken into multiple containers, you might need to share the IPC mechanisms\nof the containers.\nNetwork settings\n--dns=[] : Set custom dns servers for the container\n--net=\"bridge\" : Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container:name|id': reuses another container network stack\n 'host': use the host network stack inside the container\n--add-host=\"\" : Add a line to /etc/hosts (host:IP)\n--mac-address=\"\" : Sets the container's Ethernet device's MAC address\n\nBy default, all containers have networking enabled and they can make any\noutgoing connections. The operator can completely disable networking\nwith docker run --net none which disables all incoming and outgoing\nnetworking. In cases like this, you would perform I/O through files or\nSTDIN and STDOUT only.\nYour container will use the same DNS servers as the host by default, but\nyou can override this with --dns.\nBy default a random MAC is generated. You can set the container's MAC address\nexplicitly by providing a MAC via the --mac-address parameter (format:\n12:34:56:78:9a:bc).\nSupported networking modes are:\n\nnone - no networking in the container\nbridge - (default) connect the container to the bridge via veth interfaces\nhost - use the host's network stack inside the container. 
Note: This gives the container full access to local system services such as D-bus and is therefore considered insecure.\ncontainer - use another container's network stack\n\nMode: none\nWith the networking mode set to none a container will not have\naccess to any external routes. The container will still have a\nloopback interface enabled in the container but it does not have any\nroutes to external traffic.\nMode: bridge\nWith the networking mode set to bridge a container will use docker's\ndefault networking setup. A bridge is set up on the host, commonly named\ndocker0, and a pair of veth interfaces will be created for the\ncontainer. One side of the veth pair will remain on the host attached\nto the bridge while the other side of the pair will be placed inside the\ncontainer's namespaces in addition to the loopback interface. An IP\naddress will be allocated for containers on the bridge's network and\ntraffic will be routed through this bridge to the container.\nMode: host\nWith the networking mode set to host a container will share the host's\nnetwork stack and all interfaces from the host will be available to the\ncontainer. The container's hostname will match the hostname on the host\nsystem. Publishing ports and linking to other containers will not work\nwhen sharing the host's network stack.\nMode: container\nWith the networking mode set to container a container will share the\nnetwork stack of another container. 
The other container's name must be\nprovided in the format of --net container:name|id.\nFor example, to run a Redis container with Redis binding to localhost, then\nrun the redis-cli command and connect to the Redis server over the\nlocalhost interface:\n$ sudo docker run -d --name redis example/redis --bind 127.0.0.1\n$ # use the redis container's network stack to access localhost\n$ sudo docker run --rm -ti --net container:redis example/redis-cli -h 127.0.0.1\n\nManaging /etc/hosts\nYour container will have lines in /etc/hosts which define the hostname of the\ncontainer itself as well as localhost and a few other common things. The\n--add-host flag can be used to add additional lines to /etc/hosts.\n$ docker run -ti --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts\n172.17.0.22 09d03f76bf2c\nfe00::0 ip6-localnet\nff00::0 ip6-mcastprefix\nff02::1 ip6-allnodes\nff02::2 ip6-allrouters\n127.0.0.1 localhost\n::1 localhost ip6-localhost ip6-loopback\n86.75.30.9 db-static\n\nRestart policies (--restart)\nUsing the --restart flag on docker run you can specify a restart policy for\nhow a container should or should not be restarted on exit.\nWhen a restart policy is active on a container, it will be shown as either Up\nor Restarting in docker ps. It can also be\nuseful to use docker events to see the\nrestart policy in effect.\nDocker supports the following restart policies:\n\nno - Do not automatically restart the container when it exits. This is the\n default.\non-failure[:max-retries] - Restart only if the container exits with a\n non-zero exit status. Optionally, limit the number of restart retries the\n Docker daemon attempts.\nalways - Always restart the container regardless of the exit status. When\n you specify always, the Docker daemon will try to restart the container\n indefinitely.\n\nAn ever-increasing delay (double the previous delay, starting at 100\nmilliseconds) is added before each restart to prevent flooding the server.\nThis means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600,\nand so on until either the on-failure limit is hit, or until you docker stop\nor docker rm -f the container.\nIf a container is successfully restarted (the container is started and runs\nfor at least 10 seconds), the delay is reset to its default value of 100 ms.\nYou can specify the maximum number of times Docker will try to restart the\ncontainer when using the on-failure policy. The default is that Docker\nwill try forever to restart the container. The number of (attempted) restarts\nfor a container can be obtained via docker inspect. For example, to get the number of restarts\nfor container \"my-container\":\n$ sudo docker inspect -f \"{{ .RestartCount }}\" my-container\n# 2\n\nOr, to get the last time the container was (re)started:\n$ docker inspect -f \"{{ .State.StartedAt }}\" my-container\n# 2015-03-04T23:47:07.691840179Z\n\nYou cannot set any restart policy in combination with\n\"clean up (--rm)\". Setting both --restart and --rm\nresults in an error.\nExamples\n$ sudo docker run --restart=always redis\n\nThis will run the redis container with a restart policy of always\nso that if the container exits, Docker will restart it.\n$ sudo docker run --restart=on-failure:10 redis\n\nThis will run the redis container with a restart policy of on-failure\nand a maximum restart count of 10. 
If the redis container exits with a\nnon-zero exit status more than 10 times in a row Docker will abort trying to\nrestart the container. Providing a maximum restart limit is only valid for the\non-failure policy.\nClean up (--rm)\nBy default a container's file system persists even after the container\nexits. This makes debugging a lot easier (since you can inspect the\nfinal state) and you retain all your data by default. But if you are\nrunning short-term foreground processes, these container file\nsystems can really pile up. If instead you'd like Docker to\nautomatically clean up the container and remove the file system when\nthe container exits, you can add the --rm flag:\n--rm=false: Automatically remove the container when it exits (incompatible with -d)\n\nSecurity configuration\n--security-opt=\"label:user:USER\" : Set the label user for the container\n--security-opt=\"label:role:ROLE\" : Set the label role for the container\n--security-opt=\"label:type:TYPE\" : Set the label type for the container\n--security-opt=\"label:level:LEVEL\" : Set the label level for the container\n--security-opt=\"label:disable\" : Turn off label confinement for the container\n--security-opt=\"apparmor:PROFILE\" : Set the apparmor profile to be applied \n to the container\n\nYou can override the default labeling scheme for each container by specifying\nthe --security-opt flag. For example, you can specify the MCS/MLS level, a\nrequirement for MLS systems. 
Specifying the level in the following command\nallows you to share the same content between containers.\n# docker run --security-opt label:level:s0:c100,c200 -i -t fedora bash\n\nAn MLS example might be:\n# docker run --security-opt label:level:TopSecret -i -t rhel7 bash\n\nTo disable the security labeling for this container versus running with the\n--permissive flag, use the following command:\n# docker run --security-opt label:disable -i -t fedora bash\n\nIf you want a tighter security policy on the processes within a container,\nyou can specify an alternate type for the container. You could run a container\nthat is only allowed to listen on Apache ports by executing the following\ncommand:\n# docker run --security-opt label:type:svirt_apache_t -i -t centos bash\n\nNote:\nYou would have to write policy defining a svirt_apache_t type.\nRuntime constraints on CPU and memory\nThe operator can also adjust the performance parameters of the\ncontainer:\n-m=\"\": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)\n-c=0 : CPU shares (relative weight)\n\nThe operator can constrain the memory available to a container easily\nwith docker run -m. If the host supports swap memory, then the -m\nmemory setting can be larger than physical RAM.\nWe have four ways to set memory usage:\n\n\nmemory=inf, memory-swap=inf (specify neither memory nor memory-swap)\n There is no limit on memory use; the container can use as much memory as needed.\n\n\nmemory=L<inf, memory-swap=inf (specify memory and set memory-swap as -1)\n It is not allowed to use more than L bytes of memory, but use as much swap\n as you want (only if the host supports swap memory).\n\n\nmemory=L<inf, memory-swap=2*L (specify memory without memory-swap)\n It is not allowed to use more than L bytes of memory, swap plus memory\n usage is double of that.\n\n\nmemory=L<inf, memory-swap=S<inf, L<=S (specify both memory and memory-swap)\n It is not allowed to use more than L bytes of memory, swap plus memory\n usage is limited by S.\n\n\nSimilarly the operator can increase the priority of this container with\nthe -c option. 
By default, all containers run at the same priority and\nget the same proportion of CPU cycles, but you can tell the kernel to\ngive more shares of CPU time to one or more containers when you start\nthem via Docker.\nThe flag -c or --cpu-shares with value 0 indicates that the running\ncontainer has access to all 1024 (default) CPU shares. However, this value\ncan be modified to run a container with a different priority or different\nproportion of CPU cycles.\nFor example, if we start three containers {C0, C1, C2} with default values\n(-c or --cpu-shares = 0) and one {C3} with (-c or --cpu-shares=512),\nthen C0, C1, and C2 would have access to 100% of the CPU shares (1024) and C3 would\nonly have access to 50% of the CPU shares (512). In the context of a time-sliced OS\nwith the time quantum set to 100 milliseconds, containers C0, C1, and C2 will run\nfor a full time quantum, and container C3 will run for a half time quantum, i.e., 50\nmilliseconds.\nRuntime privilege, Linux capabilities, and LXC configuration\n--cap-add: Add Linux capabilities\n--cap-drop: Drop Linux capabilities\n--privileged=false: Give extended privileges to this container\n--device=[]: Allows you to run devices inside the container without the --privileged flag.\n--lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"\n\nBy default, Docker containers are \"unprivileged\" and cannot, for\nexample, run a Docker daemon inside a Docker container. This is because\nby default a container is not allowed to access any devices, but a\n\"privileged\" container is given access to all devices (see lxc-template.go\nand documentation on cgroups devices).\nWhen the operator executes docker run --privileged, Docker will enable\naccess to all devices on the host as well as set some configuration\nin AppArmor or SELinux to allow the container nearly all the same access to the\nhost as processes running outside containers on the host. 
Additional\ninformation about running with --privileged is available on the\nDocker Blog.\nIf you want to limit access to a specific device or devices you can use\nthe --device flag. It allows you to specify one or more devices that\nwill be accessible within the container.\n$ sudo docker run --device=/dev/snd:/dev/snd ...\n\nBy default, the container will be able to read, write, and mknod these devices.\nThis can be overridden using a third :rwm set of options to each --device flag:\n $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc\n You will not be able to write the partition table.\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc\n crash....\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc\n fdisk: unable to open /dev/xvdc: Operation not permitted\n\n\nIn addition to --privileged, the operator can have fine grain control over the\ncapabilities using --cap-add and --cap-drop. By default, Docker has a default\nlist of capabilities that are kept. 
Both flags support the value all, so if the\noperator wants to have all capabilities but MKNOD they could use:\n$ sudo docker run --cap-add=ALL --cap-drop=MKNOD ...\n\nFor interacting with the network stack, instead of using --privileged they\nshould use --cap-add=NET_ADMIN to modify the network interfaces.\n$ docker run -t -i --rm ubuntu:14.04 ip link add dummy0 type dummy\nRTNETLINK answers: Operation not permitted\n$ docker run -t -i --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy\n\nTo mount a FUSE based filesystem, you need to combine both --cap-add and\n--device:\n$ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt\nfuse: failed to open /dev/fuse: Operation not permitted\n$ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt\nfusermount: mount failed: Operation not permitted\n$ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs\n# sshfs sven@10.10.10.20:/home/sven /mnt\nThe authenticity of host '10.10.10.20 (10.10.10.20)' can't be established.\nECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6.\nAre you sure you want to continue connecting (yes/no)? yes\nsven@10.10.10.20's password:\nroot@30aa0cfaf1b5:/# ls -la /mnt/src/docker\ntotal 1516\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 .\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 ..\n-rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore\n-rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git\n-rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore\n....\n\nIf the Docker daemon was started using the lxc exec-driver\n(docker -d --exec-driver=lxc) then the operator can also specify LXC options\nusing one or more --lxc-conf parameters. 
These can be new parameters or\noverride existing parameters from the lxc-template.go.\nNote that in the future, a given host's docker daemon may not use LXC, so this\nis an implementation-specific configuration meant for operators already\nfamiliar with using LXC directly.\n\nNote:\nIf you use --lxc-conf to modify a container's configuration which is also\nmanaged by the Docker daemon, then the Docker daemon will not know about this\nmodification, and you will need to manage any conflicts yourself. For example,\nyou can use --lxc-conf to set a container's IP address, but this will not be\nreflected in the /etc/hosts file.\n\nOverriding Dockerfile image defaults\nWhen a developer builds an image from a Dockerfile\nor when she commits it, the developer can set a number of default parameters\nthat take effect when the image starts up as a container.\nFour of the Dockerfile commands cannot be overridden at runtime: FROM,\nMAINTAINER, RUN, and ADD. Everything else has a corresponding override\nin docker run. We'll go through what the developer might have set in each\nDockerfile instruction and how the operator can override that setting.\n\nCMD (Default Command or Options)\nENTRYPOINT (Default Command to Execute at Runtime)\nEXPOSE (Incoming Ports)\nENV (Environment Variables)\nVOLUME (Shared Filesystems)\nUSER\nWORKDIR\n\nCMD (default command or options)\nRecall the optional COMMAND in the Docker\ncommandline:\n$ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]\n\nThis command is optional because the person who created the IMAGE may\nhave already provided a default COMMAND using the Dockerfile CMD\ninstruction. 
As the operator (the person running a container from the\nimage), you can override that CMD instruction just by specifying a new\nCOMMAND.\nIf the image also specifies an ENTRYPOINT then the CMD or COMMAND\nget appended as arguments to the ENTRYPOINT.\nENTRYPOINT (default command to execute at runtime)\n--entrypoint=\"\": Overwrite the default entrypoint set by the image\n\nThe ENTRYPOINT of an image is similar to a COMMAND because it\nspecifies what executable to run when the container starts, but it is\n(purposely) more difficult to override. The ENTRYPOINT gives a\ncontainer its default nature or behavior, so that when you set an\nENTRYPOINT you can run the container as if it were that binary,\ncomplete with default options, and you can pass in more options via the\nCOMMAND. But, sometimes an operator may want to run something else\ninside the container, so you can override the default ENTRYPOINT at\nruntime by using a string to specify the new ENTRYPOINT. Here is an\nexample of how to run a shell in a container that has been set up to\nautomatically run something else (like /usr/bin/redis-server):\n$ sudo docker run -i -t --entrypoint /bin/bash example/redis\n\nor two examples of how to pass more parameters to that ENTRYPOINT:\n$ sudo docker run -i -t --entrypoint /bin/bash example/redis -c ls -l\n$ sudo docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help\n\nEXPOSE (incoming ports)\nThe Dockerfile doesn't give much control over networking, only providing\nthe EXPOSE instruction to give a hint to the operator about what\nincoming ports might provide services. 
The following options work with\nor override the Dockerfile's exposed defaults:\n--expose=[]: Expose a port or a range of ports from the container\n without publishing it to your host\n-P=false : Publish all exposed ports to the host interfaces\n-p=[] : Publish a container's port or a range of ports to the host\n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports.\n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n--link=\"\" : Add link to another container (name or id:alias)\n\nAs mentioned previously, EXPOSE (and --expose) makes ports available\nin a container for incoming connections. The port number on the\ninside of the container (where the service listens) does not need to be\nthe same number as the port exposed on the outside of the container\n(where clients connect), so inside the container you might have an HTTP\nservice listening on port 80 (and so you EXPOSE 80 in the Dockerfile),\nbut outside the container the port might be 42800.\nTo help a new client container reach the server container's internal\nport --expose'd by the operator or EXPOSE'd by the\ndeveloper, the operator has three choices: start the server container\nwith -P or -p, or start the client container with --link.\nIf the operator uses -P or -p then Docker will make the exposed port\naccessible on the host and the ports will be available to any client\nthat can reach the host. When using -P, Docker will bind the exposed\nports to a random port on the host between 49153 and 65535. 
To find the\nmapping between the host ports and the exposed ports, use docker port.\nIf the operator uses --link when starting the new client container,\nthen the client container can access the exposed port via a private\nnetworking interface. Docker will set some environment variables in the\nclient container to help indicate which interface and port to use.\nENV (environment variables)\nWhen a new container is created, Docker will set the following environment\nvariables automatically:\n\nHOME - Set based on the value of USER\nHOSTNAME - The hostname associated with the container\nPATH - Includes popular directories, such as:\n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nTERM - xterm if the container is allocated a pseudo-TTY\n\nThe container may also include environment variables defined\nas a result of the container being linked with another container. See\nthe Container Links\nsection for more details.\nAdditionally, the operator can set any environment variable in the\ncontainer by using one or more -e flags, even overriding those mentioned\nabove, or already defined by the developer with a Dockerfile ENV:\n$ sudo docker run -e \"deep=purple\" --rm ubuntu /bin/bash -c export\ndeclare -x HOME=\"/\"\ndeclare -x HOSTNAME=\"85bc26a0e200\"\ndeclare -x OLDPWD\ndeclare -x PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\ndeclare -x PWD=\"/\"\ndeclare -x SHLVL=\"1\"\ndeclare -x container=\"lxc\"\ndeclare -x deep=\"purple\"\n\nSimilarly the operator can set the hostname with -h.\n--link name or id:alias also sets environment variables, using the alias string to\ndefine environment variables within the container that give the IP and PORT\ninformation for connecting to the service container. 
Let's imagine we have a\ncontainer running Redis:\n# Start the service container, named redis-name\n$ sudo docker run -d --name redis-name dockerfiles/redis\n4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3\n\n# The redis-name container exposed port 6379\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n4241164edf6f dockerfiles/redis:latest /redis-stable/src/re 5 seconds ago Up 4 seconds 6379/tcp redis-name\n\n# Note that there are no public ports exposed since we didn't use -p or -P\n$ sudo docker port 4241164edf6f 6379\n2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f\n\nYet we can get information about the Redis container's exposed ports\nwith --link. Choose an alias that will form a\nvalid environment variable!\n$ sudo docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export\ndeclare -x HOME=\"/\"\ndeclare -x HOSTNAME=\"acda7f7b1cdc\"\ndeclare -x OLDPWD\ndeclare -x PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\ndeclare -x PWD=\"/\"\ndeclare -x REDIS_ALIAS_NAME=\"/distracted_wright/redis\"\ndeclare -x REDIS_ALIAS_PORT=\"tcp://172.17.0.32:6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP=\"tcp://172.17.0.32:6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_ADDR=\"172.17.0.32\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_PORT=\"6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_PROTO=\"tcp\"\ndeclare -x SHLVL=\"1\"\ndeclare -x container=\"lxc\"\n\nAnd we can use that information to connect from another container as a client:\n$ sudo docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'\n172.17.0.32:6379\n\nDocker will also map the private IP address to the alias of a linked\ncontainer by inserting an entry into /etc/hosts. 
You can use this\nmechanism to communicate with a linked container by its alias:\n$ sudo docker run -d --name servicename busybox sleep 30\n$ sudo docker run -i -t --link servicename:servicealias busybox ping -c 1 servicealias\n\nIf you restart the source container (servicename in this case), the recipient\ncontainer's /etc/hosts entry will be automatically updated.\n\nNote:\nUnlike host entries in the /etc/hosts file, IP addresses stored in the\nenvironment variables are not automatically updated if the source container is\nrestarted. We recommend using the host entries in /etc/hosts to resolve the\nIP address of linked containers.\n\nVOLUME (shared filesystems)\n-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].\n If \"container-dir\" is missing, then docker creates a new volume.\n--volumes-from=\"\": Mount all volumes from the given container(s)\n\nThe volumes commands are complex enough to have their own documentation\nin section Managing data in\ncontainers. A developer can define\none or more VOLUMEs associated with an image, but only the operator\ncan give access from one container to another (or from a container to a\nvolume mounted on the host).\nUSER\nThe default user within a container is root (id = 0), but if the\ndeveloper created additional users, those are accessible too. The\ndeveloper can set a default user to run the first process with the\nDockerfile USER instruction, but the operator can override it:\n-u=\"\": Username or UID\n\n\nNote: if you pass a numeric uid, it must be in the range 0-2147483647.\n\nWORKDIR\nThe default working directory for running binaries within a container is the\nroot directory (/), but the developer can set a different default with the\nDockerfile WORKDIR command. The operator can override this with:\n-w=\"\": Working directory inside the container",
"title": "Run Reference"
},
{
"loc": "/reference/run#docker-run-reference",
"tags": "",
"text": "Docker runs processes in isolated containers . When an operator\nexecutes docker run , she starts a process with its own file system,\nits own networking, and its own isolated process tree. The Image which starts the process may define\ndefaults related to the binary to run, the networking to expose, and\nmore, but docker run gives final control to the operator who starts\nthe container from the image. That's the main reason run has more options than any\nother docker command.",
"title": "Docker run reference"
},
{
"loc": "/reference/run#general-form",
"tags": "",
"text": "The basic docker run command takes this form: $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] To learn how to interpret the types of [OPTIONS] ,\nsee Option types . The list of [OPTIONS] breaks down into two groups: Settings exclusive to operators, including: Detached or Foreground running, Container Identification, Network settings, and Runtime Constraints on CPU and Memory Privileges and LXC Configuration Settings shared between operators and developers, where operators can\n override defaults developers set in images at build time. Together, the docker run [OPTIONS] give the operator complete control over runtime\nbehavior, allowing them to override all defaults set by\nthe developer during docker build and nearly all the defaults set by\nthe Docker runtime itself.",
"title": "General form"
},
{
"loc": "/reference/run#operator-exclusive-options",
"tags": "",
"text": "Only the operator (the person executing docker run ) can set the\nfollowing options. Detached vs Foreground Detached (-d) Foreground Container Identification Name (--name) PID Equivalent IPC Settings Network Settings Restart Policies (--restart) Clean Up (--rm) Runtime Constraints on CPU and Memory Runtime Privilege, Linux Capabilities, and LXC Configuration",
"title": "Operator exclusive options"
},
{
"loc": "/reference/run#detached-vs-foreground",
"tags": "",
"text": "When starting a Docker container, you must first decide if you want to\nrun the container in the background in a \"detached\" mode or in the\ndefault foreground mode: -d=false: Detached mode: Run container in the background, print new container id Detached (-d) In detached mode ( -d=true or just -d ), all I/O should be done\nthrough network connections or shared volumes because the container is\nno longer listening to the command line where you executed docker run .\nYou can reattach to a detached container with docker attach . If you choose to run a\ncontainer in the detached mode, then you cannot use the --rm option. Foreground In foreground mode (the default when -d is not specified), docker\nrun can start the process in the container and attach the console to\nthe process's standard input, output, and standard error. It can even\npretend to be a TTY (this is what most command line executables expect)\nand pass along signals. All of that is configurable: -a=[] : Attach to `STDIN`, `STDOUT` and/or `STDERR`\n-t=false : Allocate a pseudo-tty\n--sig-proxy=true: Proxify all received signal to the process (non-TTY mode only)\n-i=false : Keep STDIN open even if not attached If you do not specify -a then Docker will attach all standard\nstreams . You can\nspecify to which of the three standard streams ( STDIN , STDOUT , STDERR ) you'd like to connect instead, as in: $ sudo docker run -a stdin -a stdout -i -t ubuntu /bin/bash For interactive processes (like a shell), you must use -i -t together in\norder to allocate a tty for the container process. Specifying -t is however\nforbidden when the client standard output is redirected or pipe, such as in: echo test | docker run -i busybox cat .",
"title": "Detached vs foreground"
},
{
"loc": "/reference/run#container-identification",
"tags": "",
"text": "Name (--name) The operator can identify a container in three ways: UUID long identifier\n (\"f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778\") UUID short identifier (\"f78375b1c487\") Name (\"evil_ptolemy\") The UUID identifiers come from the Docker daemon, and if you do not\nassign a name to the container with --name then the daemon will also\ngenerate a random string name too. The name can become a handy way to\nadd meaning to a container since you can use this name when defining links (or any\nother place you need to identify a container). This works for both\nbackground and foreground Docker containers. PID equivalent Finally, to help with automation, you can have Docker write the\ncontainer ID out to a file of your choosing. This is similar to how some\nprograms might write out their process ID to a file (you've seen them as\nPID files): --cidfile=\"\": Write the container ID to the file Image[:tag] While not strictly a means of identifying a container, you can specify a version of an\nimage you'd like to run the container with by adding image[:tag] to the command. For\nexample, docker run ubuntu:14.04 .",
"title": "Container identification"
},
{
"loc": "/reference/run#pid-settings",
"tags": "",
"text": "--pid=\"\" : Set the PID (Process) Namespace mode for the container,\n 'host': use the host's PID namespace inside the container By default, all containers have the PID namespace enabled. PID namespace provides separation of processes. The PID Namespace removes the\nview of the system processes, and allows process ids to be reused including\npid 1. In certain cases you want your container to share the host's process namespace,\nbasically allowing processes within the container to see all of the processes\non the system. For example, you could build a container with debugging tools\nlike strace or gdb , but want to use these tools when debugging processes\nwithin the container. $ sudo docker run --pid=host rhel7 strace -p 1234 This command would allow you to use strace inside the container on pid 1234 on\nthe host.",
"title": "PID Settings"
},
{
"loc": "/reference/run#ipc-settings",
"tags": "",
"text": "--ipc=\"\" : Set the IPC mode for the container,\n 'container: name|id ': reuses another container's IPC namespace\n 'host': use the host's IPC namespace inside the container By default, all containers have the IPC namespace enabled. IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. Shared memory segments are used to accelerate inter-process communication at\nmemory speed, rather than through pipes or through the network stack. Shared\nmemory is commonly used by databases and custom-built (typically C/OpenMPI, \nC++/using boost libraries) high performance applications for scientific\ncomputing and financial services industries. If these types of applications\nare broken into multiple containers, you might need to share the IPC mechanisms\nof the containers.",
"title": "IPC Settings"
},
{
"loc": "/reference/run#network-settings",
"tags": "",
"text": "--dns=[] : Set custom dns servers for the container\n--net=\"bridge\" : Set the Network mode for the container\n 'bridge': creates a new network stack for the container on the docker bridge\n 'none': no networking for this container\n 'container: name|id ': reuses another container network stack\n 'host': use the host network stack inside the container\n--add-host=\"\" : Add a line to /etc/hosts (host:IP)\n--mac-address=\"\" : Sets the container's Ethernet device's MAC address By default, all containers have networking enabled and they can make any\noutgoing connections. The operator can completely disable networking\nwith docker run --net none which disables all incoming and outgoing\nnetworking. In cases like this, you would perform I/O through files or STDIN and STDOUT only. Your container will use the same DNS servers as the host by default, but\nyou can override this with --dns . By default a random MAC is generated. You can set the container's MAC address\nexplicitly by providing a MAC via the --mac-address parameter (format: 12:34:56:78:9a:bc ). Supported networking modes are: none - no networking in the container bridge - (default) connect the container to the bridge via veth interfaces host - use the host's network stack inside the container. Note: This gives the container full access to local system services such as D-bus and is therefore considered insecure. container - use another container's network stack Mode: none With the networking mode set to none a container will not have a\naccess to any external routes. The container will still have a loopback interface enabled in the container but it does not have any\nroutes to external traffic. Mode: bridge With the networking mode set to bridge a container will use docker's\ndefault networking setup. A bridge is setup on the host, commonly named docker0 , and a pair of veth interfaces will be created for the\ncontainer. 
One side of the veth pair will remain on the host attached\nto the bridge while the other side of the pair will be placed inside the\ncontainer's namespaces in addition to the loopback interface. An IP\naddress will be allocated for containers on the bridge's network and\ntraffic will be routed through this bridge to the container. Mode: host With the networking mode set to host a container will share the host's\nnetwork stack and all interfaces from the host will be available to the\ncontainer. The container's hostname will match the hostname on the host\nsystem. Publishing ports and linking to other containers will not work\nwhen sharing the host's network stack. Mode: container With the networking mode set to container a container will share the\nnetwork stack of another container. The other container's name must be\nprovided in the format of --net container: name|id . For example, running a Redis container with Redis binding to localhost , then\nrunning the redis-cli command and connecting to the Redis server over the localhost interface: $ sudo docker run -d --name redis example/redis --bind 127.0.0.1\n$ # use the redis container's network stack to access localhost\n$ sudo docker run --rm -ti --net container:redis example/redis-cli -h 127.0.0.1 Managing /etc/hosts Your container will have lines in /etc/hosts which define the hostname of the\ncontainer itself as well as localhost and a few other common things. The --add-host flag can be used to add additional lines to /etc/hosts . $ docker run -ti --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts\n172.17.0.22 09d03f76bf2c\nfe00::0 ip6-localnet\nff00::0 ip6-mcastprefix\nff02::1 ip6-allnodes\nff02::2 ip6-allrouters\n127.0.0.1 localhost\n::1 localhost ip6-localhost ip6-loopback\n86.75.30.9 db-static",
"title": "Network settings"
},
{
"loc": "/reference/run#restart-policies-restart",
"tags": "",
"text": "Using the --restart flag on Docker run you can specify a restart policy for\nhow a container should or should not be restarted on exit. When a restart policy is active on a container, it will be shown as either Up \nor Restarting in docker ps . It can also be\nuseful to use docker events to see the\nrestart policy in effect. Docker supports the following restart policies: \n \n \n Policy \n Result \n \n \n \n \n no \n \n Do not automatically restart the container when it exits. This is the \n default.\n \n \n \n \n \n on-failure [:max-retries]\n \n \n \n Restart only if the container exits with a non-zero exit status.\n Optionally, limit the number of restart retries the Docker \n daemon attempts.\n \n \n \n always \n \n Always restart the container regardless of the exit status.\n When you specify always, the Docker daemon will try to restart\n the container indefinitely.\n \n \n An ever increasing delay (double the previous delay, starting at 100\nmilliseconds) is added before each restart to prevent flooding the server.\nThis means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600,\nand so on until either the on-failure limit is hit, or when you docker stop \nor docker rm -f the container. If a container is succesfully restarted (the container is started and runs\nfor at least 10 seconds), the delay is reset to its default value of 100 ms. You can specify the maximum amount of times Docker will try to restart the\ncontainer when using the on-failure policy. The default is that Docker\nwill try forever to restart the container. The number of (attempted) restarts\nfor a container can be obtained via docker inspect . 
For example, to get the number of restarts\nfor container \"my-container\"; $ sudo docker inspect -f \"{{ .RestartCount }}\" my-container\n# 2 Or, to get the last time the container was (re)started; $ docker inspect -f \"{{ .State.StartedAt }}\" my-container\n# 2015-03-04T23:47:07.691840179Z You cannot set any restart policy in combination with \"clean up (--rm)\" . Setting both --restart and --rm \nresults in an error. Examples $ sudo docker run --restart=always redis This will run the redis container with a restart policy of always \nso that if the container exits, Docker will restart it. $ sudo docker run --restart=on-failure:10 redis This will run the redis container with a restart policy of on-failure \nand a maximum restart count of 10. If the redis container exits with a\nnon-zero exit status more than 10 times in a row Docker will abort trying to\nrestart the container. Providing a maximum restart limit is only valid for the on-failure policy.",
"title": "Restart policies (--restart)"
},
{
"loc": "/reference/run#clean-up-rm",
"tags": "",
"text": "By default a container's file system persists even after the container\nexits. This makes debugging a lot easier (since you can inspect the\nfinal state) and you retain all your data by default. But if you are\nrunning short-term foreground processes, these container file\nsystems can really pile up. If instead you'd like Docker to automatically clean up the container and remove the file system when\nthe container exits , you can add the --rm flag: --rm=false: Automatically remove the container when it exits (incompatible with -d)",
"title": "Clean up (--rm)"
},
{
"loc": "/reference/run#security-configuration",
"tags": "",
"text": "--security-opt=\"label:user:USER\" : Set the label user for the container\n--security-opt=\"label:role:ROLE\" : Set the label role for the container\n--security-opt=\"label:type:TYPE\" : Set the label type for the container\n--security-opt=\"label:level:LEVEL\" : Set the label level for the container\n--security-opt=\"label:disable\" : Turn off label confinement for the container\n--security-opt=\"apparmor:PROFILE\" : Set the apparmor profile to be applied \n to the container You can override the default labeling scheme for each container by specifying\nthe --security-opt flag. For example, you can specify the MCS/MLS level, a\nrequirement for MLS systems. Specifying the level in the following command\nallows you to share the same content between containers. # docker run --security-opt label:level:s0:c100,c200 -i -t fedora bash An MLS example might be: # docker run --security-opt label:level:TopSecret -i -t rhel7 bash To disable the security labeling for this container versus running with the --permissive flag, use the following command: # docker run --security-opt label:disable -i -t fedora bash If you want a tighter security policy on the processes within a container,\nyou can specify an alternate type for the container. You could run a container\nthat is only allowed to listen on Apache ports by executing the following\ncommand: # docker run --security-opt label:type:svirt_apache_t -i -t centos bash Note: You would have to write policy defining a svirt_apache_t type.",
"title": "Security configuration"
},
{
"loc": "/reference/run#runtime-constraints-on-cpu-and-memory",
"tags": "",
"text": "The operator can also adjust the performance parameters of the\ncontainer: -m=\"\": Memory limit (format: number optional unit , where unit = b, k, m or g)\n-c=0 : CPU shares (relative weight) The operator can constrain the memory available to a container easily\nwith docker run -m . If the host supports swap memory, then the -m \nmemory setting can be larger than physical RAM. We have four ways to set memory usage: memory=L inf, memory-swap=inf (specify memory and set memory-swap as -1 )\n It is not allowed to use more than L bytes of memory, but use as much swap\n as you want (only if the host supports swap memory). memory=L inf, memory-swap=2 L (specify memory without memory-swap)\n It is not allowed to use more than L bytes of memory, swap plus* memory\n usage is double of that. memory=L inf, memory-swap=S inf, L =S (specify both memory and memory-swap)\n It is not allowed to use more than L bytes of memory, swap plus memory\n usage is limited by S. Similarly the operator can increase the priority of this container with\nthe -c option. By default, all containers run at the same priority and\nget the same proportion of CPU cycles, but you can tell the kernel to\ngive more shares of CPU time to one or more containers when you start\nthem via Docker. The flag -c or --cpu-shares with value 0 indicates that the running\ncontainer has access to all 1024 (default) CPU shares. However, this value\ncan be modified to run a container with a different priority or different\nproportion of CPU cycles. E.g., If we start three {C0, C1, C2} containers with default values\n( -c OR --cpu-shares = 0) and one {C3} with ( -c or --cpu-shares =512)\nthen C0, C1, and C2 would have access to 100% CPU shares (1024) and C3 would\nonly have access to 50% CPU shares (512). In the context of a time-sliced OS\nwith time quantum set as 100 milliseconds, containers C0, C1, and C2 will run\nfor full-time quantum, and container C3 will run for half-time quantum i.e 50\nmilliseconds.",
"title": "Runtime constraints on CPU and memory"
},
{
"loc": "/reference/run#runtime-privilege-linux-capabilities-and-lxc-configuration",
"tags": "",
"text": "--cap-add: Add Linux capabilities\n--cap-drop: Drop Linux capabilities\n--privileged=false: Give extended privileges to this container\n--device=[]: Allows you to run devices inside the container without the --privileged flag.\n--lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\" By default, Docker containers are \"unprivileged\" and cannot, for\nexample, run a Docker daemon inside a Docker container. This is because\nby default a container is not allowed to access any devices, but a\n\"privileged\" container is given access to all devices (see lxc-template.go \nand documentation on cgroups devices ). When the operator executes docker run --privileged , Docker will enable\nto access to all devices on the host as well as set some configuration\nin AppArmor or SELinux to allow the container nearly all the same access to the\nhost as processes running outside containers on the host. Additional\ninformation about running with --privileged is available on the Docker Blog . If you want to limit access to a specific device or devices you can use\nthe --device flag. It allows you to specify one or more devices that\nwill be accessible within the container. $ sudo docker run --device=/dev/snd:/dev/snd ... 
By default, the container will be able to read , write , and mknod these devices.\nThis can be overridden using a third :rwm set of options to each --device flag: $ sudo docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc\n\n Command (m for help): q\n $ sudo docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc\n You will not be able to write the partition table.\n\n Command (m for help): q\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc\n crash....\n\n $ sudo docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc\n fdisk: unable to open /dev/xvdc: Operation not permitted In addition to --privileged , the operator can have fine-grained control over the\ncapabilities using --cap-add and --cap-drop . Docker has a default\nlist of capabilities that are kept. Both flags support the value all , so if the\noperator wants to have all capabilities but MKNOD they could use: $ sudo docker run --cap-add=ALL --cap-drop=MKNOD ... For interacting with the network stack, instead of using --privileged they\nshould use --cap-add=NET_ADMIN to modify the network interfaces. 
$ docker run -t -i --rm ubuntu:14.04 ip link add dummy0 type dummy\nRTNETLINK answers: Operation not permitted\n$ docker run -t -i --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy To mount a FUSE based filesystem, you need to combine both --cap-add and --device : $ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt\nfuse: failed to open /dev/fuse: Operation not permitted\n$ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt\nfusermount: mount failed: Operation not permitted\n$ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs\n# sshfs sven@10.10.10.20:/home/sven /mnt\nThe authenticity of host '10.10.10.20 (10.10.10.20)' can't be established.\nECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6.\nAre you sure you want to continue connecting (yes/no)? yes\nsven@10.10.10.20's password:\nroot@30aa0cfaf1b5:/# ls -la /mnt/src/docker\ntotal 1516\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 .\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 ..\n-rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore\n-rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml\ndrwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git\n-rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore\n.... If the Docker daemon was started using the lxc exec-driver\n( docker -d --exec-driver=lxc ) then the operator can also specify LXC options\nusing one or more --lxc-conf parameters. These can be new parameters or\noverride existing parameters from the lxc-template.go .\nNote that in the future, a given host's docker daemon may not use LXC, so this\nis an implementation-specific configuration meant for operators already\nfamiliar with using LXC directly. Note: \nIf you use --lxc-conf to modify a container's configuration which is also\nmanaged by the Docker daemon, then the Docker daemon will not know about this\nmodification, and you will need to manage any conflicts yourself. 
For example,\nyou can use --lxc-conf to set a container's IP address, but this will not be\nreflected in the /etc/hosts file.",
"title": "Runtime privilege, Linux capabilities, and LXC configuration"
},
{
"loc": "/reference/run#overriding-dockerfile-image-defaults",
"tags": "",
"text": "When a developer builds an image from a Dockerfile \nor when she commits it, the developer can set a number of default parameters\nthat take effect when the image starts up as a container. Four of the Dockerfile commands cannot be overridden at runtime: FROM , MAINTAINER , RUN , and ADD . Everything else has a corresponding override\nin docker run . We'll go through what the developer might have set in each\nDockerfile instruction and how the operator can override that setting. CMD (Default Command or Options) ENTRYPOINT (Default Command to Execute at Runtime) EXPOSE (Incoming Ports) ENV (Environment Variables) VOLUME (Shared Filesystems) USER WORKDIR",
"title": "Overriding Dockerfile image defaults"
},
{
"loc": "/reference/run#cmd-default-command-or-options",
"tags": "",
"text": "Recall the optional COMMAND in the Docker\ncommandline: $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] This command is optional because the person who created the IMAGE may\nhave already provided a default COMMAND using the Dockerfile CMD \ninstruction. As the operator (the person running a container from the\nimage), you can override that CMD instruction just by specifying a new COMMAND . If the image also specifies an ENTRYPOINT then the CMD or COMMAND \nget appended as arguments to the ENTRYPOINT .",
"title": "CMD (default command or options)"
},
{
"loc": "/reference/run#entrypoint-default-command-to-execute-at-runtime",
"tags": "",
"text": "--entrypoint=\"\": Overwrite the default entrypoint set by the image The ENTRYPOINT of an image is similar to a COMMAND because it\nspecifies what executable to run when the container starts, but it is\n(purposely) more difficult to override. The ENTRYPOINT gives a\ncontainer its default nature or behavior, so that when you set an ENTRYPOINT you can run the container as if it were that binary ,\ncomplete with default options, and you can pass in more options via the COMMAND . But, sometimes an operator may want to run something else\ninside the container, so you can override the default ENTRYPOINT at\nruntime by using a string to specify the new ENTRYPOINT . Here is an\nexample of how to run a shell in a container that has been set up to\nautomatically run something else (like /usr/bin/redis-server ): $ sudo docker run -i -t --entrypoint /bin/bash example/redis or two examples of how to pass more parameters to that ENTRYPOINT: $ sudo docker run -i -t --entrypoint /bin/bash example/redis -c ls -l\n$ sudo docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help",
"title": "ENTRYPOINT (default command to execute at runtime)"
},
{
"loc": "/reference/run#expose-incoming-ports",
"tags": "",
"text": "The Dockerfile doesn't give much control over networking, only providing\nthe EXPOSE instruction to give a hint to the operator about what\nincoming ports might provide services. The following options work with\nor override the Dockerfile's exposed defaults: --expose=[]: Expose a port or a range of ports from the container\n without publishing it to your host\n-P=false : Publish all exposed ports to the host interfaces\n-p=[] : Publish a container\u1fbfs port or a range of ports to the host \n format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort\n Both hostPort and containerPort can be specified as a range of ports. \n When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)\n (use 'docker port' to see the actual mapping)\n--link=\"\" : Add link to another container ( name or id :alias) As mentioned previously, EXPOSE (and --expose ) makes ports available in a container for incoming connections. The port number on the\ninside of the container (where the service listens) does not need to be\nthe same number as the port exposed on the outside of the container\n(where clients connect), so inside the container you might have an HTTP\nservice listening on port 80 (and so you EXPOSE 80 in the Dockerfile),\nbut outside the container the port might be 42800. To help a new client container reach the server container's internal\nport operator --expose 'd by the operator or EXPOSE 'd by the\ndeveloper, the operator has three choices: start the server container\nwith -P or -p, or start the client container with --link . If the operator uses -P or -p then Docker will make the exposed port\naccessible on the host and the ports will be available to any client\nthat can reach the host. When using -P , Docker will bind the exposed \nports to a random port on the host between 49153 and 65535. 
To find the\nmapping between the host ports and the exposed ports, use docker port . If the operator uses --link when starting the new client container,\nthen the client container can access the exposed port via a private\nnetworking interface. Docker will set some environment variables in the\nclient container to help indicate which interface and port to use.",
"title": "EXPOSE (incoming ports)"
},
{
"loc": "/reference/run#env-environment-variables",
"tags": "",
"text": "When a new container is created, Docker will set the following environment\nvariables automatically: \n \n Variable \n Value \n \n \n HOME \n \n Set based on the value of USER \n \n \n \n HOSTNAME \n \n The hostname associated with the container\n \n \n \n PATH \n \n Includes popular directories, such as : \n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \n \n \n TERM \n xterm if the container is allocated a psuedo-TTY \n The container may also include environment variables defined\nas a result of the container being linked with another container. See\nthe Container Links \nsection for more details. Additionally, the operator can set any environment variable in the \ncontainer by using one or more -e flags, even overriding those mentioned \nabove, or already defined by the developer with a Dockerfile ENV : $ sudo docker run -e \"deep=purple\" --rm ubuntu /bin/bash -c export\ndeclare -x HOME=\"/\"\ndeclare -x HOSTNAME=\"85bc26a0e200\"\ndeclare -x OLDPWD\ndeclare -x PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\ndeclare -x PWD=\"/\"\ndeclare -x SHLVL=\"1\"\ndeclare -x container=\"lxc\"\ndeclare -x deep=\"purple\" Similarly the operator can set the hostname with -h . --link name or id :alias also sets environment variables, using the alias string to\ndefine environment variables within the container that give the IP and PORT\ninformation for connecting to the service container. 
Let's imagine we have a\ncontainer running Redis: # Start the service container, named redis-name\n$ sudo docker run -d --name redis-name dockerfiles/redis\n4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3\n\n# The redis-name container exposed port 6379\n$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n4241164edf6f dockerfiles/redis:latest /redis-stable/src/re 5 seconds ago Up 4 seconds 6379/tcp redis-name\n\n# Note that there are no public ports exposed since we didn't use -p or -P\n$ sudo docker port 4241164edf6f 6379\n2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f Yet we can get information about the Redis container's exposed ports\nwith --link . Choose an alias that will form a\nvalid environment variable! $ sudo docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export\ndeclare -x HOME=\"/\"\ndeclare -x HOSTNAME=\"acda7f7b1cdc\"\ndeclare -x OLDPWD\ndeclare -x PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\ndeclare -x PWD=\"/\"\ndeclare -x REDIS_ALIAS_NAME=\"/distracted_wright/redis\"\ndeclare -x REDIS_ALIAS_PORT=\"tcp://172.17.0.32:6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP=\"tcp://172.17.0.32:6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_ADDR=\"172.17.0.32\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_PORT=\"6379\"\ndeclare -x REDIS_ALIAS_PORT_6379_TCP_PROTO=\"tcp\"\ndeclare -x SHLVL=\"1\"\ndeclare -x container=\"lxc\" And we can use that information to connect from another container as a client: $ sudo docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'\n172.17.0.32:6379 Docker will also map the private IP address to the alias of a linked\ncontainer by inserting an entry into /etc/hosts . 
You can use this\nmechanism to communicate with a linked container by its alias: $ sudo docker run -d --name servicename busybox sleep 30\n$ sudo docker run -i -t --link servicename:servicealias busybox ping -c 1 servicealias If you restart the source container ( servicename in this case), the recipient\ncontainer's /etc/hosts entry will be automatically updated. Note :\nUnlike host entries in the /etc/hosts file, IP addresses stored in the\nenvironment variables are not automatically updated if the source container is\nrestarted. We recommend using the host entries in /etc/hosts to resolve the\nIP address of linked containers.",
"title": "ENV (environment variables)"
},
{
"loc": "/reference/run#volume-shared-filesystems",
"tags": "",
"text": "-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].\n If \"container-dir\" is missing, then docker creates a new volume.\n--volumes-from=\"\": Mount all volumes from the given container(s) The volumes commands are complex enough to have their own documentation\nin section Managing data in \ncontainers . A developer can define\none or more VOLUME 's associated with an image, but only the operator\ncan give access from one container to another (or from a container to a\nvolume mounted on the host).",
"title": "VOLUME (shared filesystems)"
},
{
"loc": "/reference/run#user",
"tags": "",
"text": "The default user within a container is root (id = 0), but if the\ndeveloper created additional users, those are accessible too. The\ndeveloper can set a default user to run the first process with the\nDockerfile USER instruction, but the operator can override it: -u=\"\": Username or UID Note: if you pass numeric uid, it must be in range 0-2147483647.",
"title": "USER"
},
{
"loc": "/reference/run#workdir",
"tags": "",
"text": "The default working directory for running binaries within a container is the\nroot directory ( / ), but the developer can set a different default with the\nDockerfile WORKDIR command. The operator can override this with: -w=\"\": Working directory inside the container",
"title": "WORKDIR"
},
{
"loc": "/compose/cli/",
"tags": "",
"text": "CLI reference\nMost Docker Compose commands are run against one or more services. If\nthe service is not specified, the command will apply to all services.\nFor full usage information, run docker-compose [COMMAND] --help.\nCommands\nbuild\nBuilds or rebuilds services.\nServices are built once and then tagged as project_service, e.g.,\ncomposetest_db. If you change a service's Dockerfile or the contents of its\nbuild directory, run docker-compose build to rebuild it.\nhelp\nDisplays help and usage instructions for a command.\nkill\nForces running containers to stop by sending a SIGKILL signal. Optionally the\nsignal can be passed, for example:\n$ docker-compose kill -s SIGINT\n\nlogs\nDisplays log output from services.\nport\nPrints the public port for a port binding\nps\nLists containers.\npull\nPulls service images.\nrm\nRemoves stopped service containers.\nrun\nRuns a one-off command on a service.\nFor example,\n$ docker-compose run web python manage.py shell\n\nwill start the web service and then run manage.py shell in python.\nNote that by default, linked services will also be started, unless they are\nalready running.\nOne-off commands are started in new containers with the same configuration as a\nnormal container for that service, so volumes, links, etc will all be created as\nexpected. When using run, there are two differences from bringing up a\ncontainer normally:\n\n\nthe command will be overridden with the one specified. So, if you run\ndocker-compose run web bash, the container's web command (which could default\nto, e.g., python app.py) will be overridden to bash\n\n\nby default no ports will be created in case they collide with already opened\nports.\n\n\nLinks are also created between one-off commands and the other containers which\nare part of that service. 
So, for example, you could run:\n$ docker-compose run db psql -h db -U docker\n\nThis would open up an interactive PostgreSQL shell for the linked db container\n(which would get created or started as needed).\nIf you do not want linked containers to start when running the one-off command,\nspecify the --no-deps flag:\n$ docker-compose run --no-deps web python manage.py shell\n\nSimilarly, if you do want the service's ports to be created and mapped to the\nhost, specify the --service-ports flag:\n $ docker-compose run --service-ports web python manage.py shell\nscale\nSets the number of containers to run for a service.\nNumbers are specified as arguments in the form service=num. For example:\n$ docker-compose scale web=2 worker=3\n\nstart\nStarts existing containers for a service.\nstop\nStops running containers without removing them. They can be started again with\ndocker-compose start.\nup\nBuilds, (re)creates, starts, and attaches to containers for a service.\nLinked services will be started, unless they are already running.\nBy default, docker-compose up will aggregate the output of each container and,\nwhen it exits, all containers will be stopped. Running docker-compose up -d,\nwill start the containers in the background and leave them running.\nBy default, if there are existing containers for a service, docker-compose up will stop and recreate them (preserving mounted volumes with volumes-from), so that changes in docker-compose.yml are picked up. If you do not want containers stopped and recreated, use docker-compose up --no-recreate. 
This will still start any stopped containers, if needed.\nOptions\n--verbose\nShows more output\n--version\nPrints version and exits\n-f, --file FILE\nSpecifies an alternate Compose yaml file (default: docker-compose.yml)\n-p, --project-name NAME\nSpecifies an alternate project name (default: current directory name)\nEnvironment Variables\nSeveral environment variables are available for you to configure Compose's behaviour.\nVariables starting with DOCKER_ are the same as those used to configure the\nDocker command-line client. If you're using boot2docker, $(boot2docker shellinit)\nwill set them to their correct values.\nCOMPOSE_PROJECT_NAME\nSets the project name, which is prepended to the name of every container started by Compose. Defaults to the basename of the current working directory.\nCOMPOSE_FILE\nSets the path to the docker-compose.yml to use. Defaults to docker-compose.yml in the current working directory.\nDOCKER_HOST\nSets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.\nDOCKER_TLS_VERIFY\nWhen set to anything other than an empty string, enables TLS communication with\nthe daemon.\nDOCKER_CERT_PATH\nConfigures the path to the ca.pem, cert.pem, and key.pem files used for TLS verification. Defaults to ~/.docker.\nCompose documentation\n\nInstalling Compose\nUser guide\nYaml file reference\nCompose environment variables\nCompose command line completion",
"title": "Compose command line"
},
{
"loc": "/compose/cli#cli-reference",
"tags": "",
"text": "Most Docker Compose commands are run against one or more services. If\nthe service is not specified, the command will apply to all services. For full usage information, run docker-compose [COMMAND] --help .",
"title": "CLI reference"
},
{
"loc": "/compose/cli#commands",
"tags": "",
"text": "build Builds or rebuilds services. Services are built once and then tagged as project_service , e.g., composetest_db . If you change a service's Dockerfile or the contents of its\nbuild directory, run docker-compose build to rebuild it. help Displays help and usage instructions for a command. kill Forces running containers to stop by sending a SIGKILL signal. Optionally the\nsignal can be passed, for example: $ docker-compose kill -s SIGINT logs Displays log output from services. port Prints the public port for a port binding ps Lists containers. pull Pulls service images. rm Removes stopped service containers. run Runs a one-off command on a service. For example, $ docker-compose run web python manage.py shell will start the web service and then run manage.py shell in python.\nNote that by default, linked services will also be started, unless they are\nalready running. One-off commands are started in new containers with the same configuration as a\nnormal container for that service, so volumes, links, etc will all be created as\nexpected. When using run , there are two differences from bringing up a\ncontainer normally: the command will be overridden with the one specified. So, if you run docker-compose run web bash , the container's web command (which could default\nto, e.g., python app.py ) will be overridden to bash by default no ports will be created in case they collide with already opened\nports. Links are also created between one-off commands and the other containers which\nare part of that service. So, for example, you could run: $ docker-compose run db psql -h db -U docker This would open up an interactive PostgreSQL shell for the linked db container\n(which would get created or started as needed). 
If you do not want linked containers to start when running the one-off command,\nspecify the --no-deps flag: $ docker-compose run --no-deps web python manage.py shell Similarly, if you do want the service's ports to be created and mapped to the\nhost, specify the --service-ports flag:\n $ docker-compose run --service-ports web python manage.py shell scale Sets the number of containers to run for a service. Numbers are specified as arguments in the form service=num . For example: $ docker-compose scale web=2 worker=3 start Starts existing containers for a service. stop Stops running containers without removing them. They can be started again with docker-compose start . up Builds, (re)creates, starts, and attaches to containers for a service. Linked services will be started, unless they are already running. By default, docker-compose up will aggregate the output of each container and,\nwhen it exits, all containers will be stopped. Running docker-compose up -d \nwill start the containers in the background and leave them running. By default, if there are existing containers for a service, docker-compose up will stop and recreate them (preserving mounted volumes with volumes-from ), so that changes in docker-compose.yml are picked up. If you do not want containers stopped and recreated, use docker-compose up --no-recreate . This will still start any stopped containers, if needed.",
|
|
"title": "Commands"
|
|
},
|
|
{
|
|
"loc": "/compose/cli#options",
|
|
"tags": "",
|
|
"text": "--verbose Shows more output --version Prints version and exits -f, --file FILE Specifies an alternate Compose yaml file (default: docker-compose.yml ) -p, --project-name NAME Specifies an alternate project name (default: current directory name)",
|
|
"title": "Options"
|
|
},
|
|
{
|
|
"loc": "/compose/cli#environment-variables",
|
|
"tags": "",
|
|
"text": "Several environment variables are available for you to configure Compose's behaviour. Variables starting with DOCKER_ are the same as those used to configure the\nDocker command-line client. If you're using boot2docker, $(boot2docker shellinit) \nwill set them to their correct values. COMPOSE_PROJECT_NAME Sets the project name, which is prepended to the name of every container started by Compose. Defaults to the basename of the current working directory. COMPOSE_FILE Sets the path to the docker-compose.yml to use. Defaults to docker-compose.yml in the current working directory. DOCKER_HOST Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock . DOCKER_TLS_VERIFY When set to anything other than an empty string, enables TLS communication with\nthe daemon. DOCKER_CERT_PATH Configures the path to the ca.pem , cert.pem , and key.pem files used for TLS verification. Defaults to ~/.docker .",
|
|
"title": "Environment Variables"
|
|
},
|
|
{
|
|
"loc": "/compose/cli#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Yaml file reference Compose environment variables Compose command line completion",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/compose/yml/",
|
|
"tags": "",
|
|
"text": "docker-compose.yml reference\nEach service defined in docker-compose.yml must specify exactly one of\nimage or build. Other keys are optional, and are analogous to their\ndocker run command-line counterparts.\nAs with docker run, options specified in the Dockerfile (e.g., CMD,\nEXPOSE, VOLUME, ENV) are respected by default - you don't need to\nspecify them again in docker-compose.yml.\nimage\nTag or partial image ID. Can be local or remote - Compose will attempt to\npull if it doesn't exist locally.\nimage: ubuntu\nimage: orchardup/postgresql\nimage: a4bc65fd\n\n\nbuild\nPath to a directory containing a Dockerfile. This directory is also the\nbuild context that is sent to the Docker daemon.\nCompose will build and tag it with a generated name, and use that image thereafter.\nbuild: /path/to/build/dir\n\n\ncommand\nOverride the default command.\ncommand: bundle exec thin -p 3000\n\n\n\nlinks\nLink to containers in another service. Either specify both the service name and\nthe link alias (SERVICE:ALIAS), or just the service name (which will also be\nused for the alias).\nlinks:\n - db\n - db:database\n - redis\n\n\nAn entry with the alias' name will be created in /etc/hosts inside containers\nfor this service, e.g:\n172.17.2.186 db\n172.17.2.186 database\n172.17.2.187 redis\n\n\nEnvironment variables will also be created - see the environment variable\nreference for details.\nexternal_links\nLink to containers started outside this docker-compose.yml or even outside\nof Compose, especially for containers that provide shared or common services.\nexternal_links follow semantics similar to links when specifying both the\ncontainer name and the link alias (CONTAINER:ALIAS).\nexternal_links:\n - redis_1\n - project_db_1:mysql\n - project_db_1:postgresql\n\n\nports\nExpose ports. 
Either specify both ports (HOST:CONTAINER), or just the container\nport (a random host port will be chosen).\n\nNote: When mapping ports in the HOST:CONTAINER format, you may experience\nerroneous results when using a container port lower than 60, because YAML will\nparse numbers in the format xx:yy as sexagesimal (base 60). For this reason,\nwe recommend always explicitly specifying your port mappings as strings.\n\nports:\n - 3000\n - 8000:8000\n - 49100:22\n - 127.0.0.1:8001:8001\n\n\nexpose\nExpose ports without publishing them to the host machine - they'll only be\naccessible to linked services. Only the internal port can be specified.\nexpose:\n - 3000\n - 8000\n\n\nvolumes\nMount paths as volumes, optionally specifying a path on the host machine\n(HOST:CONTAINER), or an access mode (HOST:CONTAINER:ro).\nvolumes:\n - /var/lib/mysql\n - cache/:/tmp/cache\n - ~/configs:/etc/configs/:ro\n\n\nvolumes_from\nMount all of the volumes from another service or container.\nvolumes_from:\n - service_name\n - container_name\n\n\nenvironment\nAdd environment variables. You can use either an array or a dictionary.\nEnvironment variables with only a key are resolved to their values on the\nmachine Compose is running on, which can be helpful for secret or host-specific values.\nenvironment:\n RACK_ENV: development\n SESSION_SECRET:\n\nenvironment:\n - RACK_ENV=development\n - SESSION_SECRET\n\n\nenv_file\nAdd environment variables from a file. Can be a single value or a list.\nEnvironment variables specified in environment override these values.\nenv_file:\n - .env\n\n\nRACK_ENV: development\n\n\nnet\nNetworking mode. Use the same values as the docker client --net parameter.\nnet: bridge\nnet: none\nnet: container:[name or id]\nnet: host\n\n\ndns\nCustom DNS servers. 
Can be a single value or a list.\ndns: 8.8.8.8\ndns:\n - 8.8.8.8\n - 9.9.9.9\n\n\ncap_add, cap_drop\nAdd or drop container capabilities.\nSee man 7 capabilities for a full list.\ncap_add:\n - ALL\n\ncap_drop:\n - NET_ADMIN\n - SYS_ADMIN\n\n\ndns_search\nCustom DNS search domains. Can be a single value or a list.\ndns_search: example.com\ndns_search:\n - dc1.example.com\n - dc2.example.com\n\n\nworking_dir, entrypoint, user, hostname, domainname, mem_limit, privileged, restart, stdin_open, tty, cpu_shares\nEach of these is a single value, analogous to its\ndocker run counterpart.\ncpu_shares: 73\n\nworking_dir: /code\nentrypoint: /code/entrypoint.sh\nuser: postgresql\n\nhostname: foo\ndomainname: foo.com\n\nmem_limit: 1000000000\nprivileged: true\n\nrestart: always\n\nstdin_open: true\ntty: true\n\n\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nCompose environment variables\nCompose command line completion",
|
|
"title": "Compose yml"
|
|
},
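The base-60 warning in the ports section above can be made concrete. The sketch below (illustrative only, not PyYAML itself; the function name is hypothetical) mimics how a YAML 1.1 parser resolves colon-separated digit groups as sexagesimal integers, which is why an unquoted mapping whose container port is below 60 silently turns into one number:

```python
def yaml11_int(scalar):
    """Sketch of YAML 1.1 sexagesimal resolution: '49100:22' -> 2946022."""
    parts = scalar.split(":")
    # Base-60 form: leading digits, followed by groups that are each below 60.
    if len(parts) > 1 and parts[0].isdigit() and all(
        p.isdigit() and int(p) < 60 for p in parts[1:]
    ):
        value = int(parts[0])
        for p in parts[1:]:
            value = value * 60 + int(p)
        return value
    return scalar  # anything else stays a plain string

print(yaml11_int("49100:22"))    # 2946022 - the port mapping becomes one integer
print(yaml11_int("8000:8000"))   # 8000:8000 - safe, second group is >= 60
print(yaml11_int('"49100:22"'))  # quoting keeps it a string
```

This is why the reference recommends always writing port mappings as quoted strings.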
|
|
{
|
|
"loc": "/compose/yml#docker-composeyml-reference",
|
|
"tags": "",
|
|
"text": "Each service defined in docker-compose.yml must specify exactly one of image or build . Other keys are optional, and are analogous to their docker run command-line counterparts. As with docker run , options specified in the Dockerfile (e.g., CMD , EXPOSE , VOLUME , ENV ) are respected by default - you don't need to\nspecify them again in docker-compose.yml . image Tag or partial image ID. Can be local or remote - Compose will attempt to\npull if it doesn't exist locally. image: ubuntu\nimage: orchardup/postgresql\nimage: a4bc65fd build Path to a directory containing a Dockerfile. This directory is also the\nbuild context that is sent to the Docker daemon. Compose will build and tag it with a generated name, and use that image thereafter. build: /path/to/build/dir command Override the default command. command: bundle exec thin -p 3000 links Link to containers in another service. Either specify both the service name and\nthe link alias ( SERVICE:ALIAS ), or just the service name (which will also be\nused for the alias). links:\n - db\n - db:database\n - redis An entry with the alias' name will be created in /etc/hosts inside containers\nfor this service, e.g: 172.17.2.186 db\n172.17.2.186 database\n172.17.2.187 redis Environment variables will also be created - see the environment variable\nreference for details. external_links Link to containers started outside this docker-compose.yml or even outside\nof Compose, especially for containers that provide shared or common services. external_links follow semantics similar to links when specifying both the\ncontainer name and the link alias ( CONTAINER:ALIAS ). external_links:\n - redis_1\n - project_db_1:mysql\n - project_db_1:postgresql ports Expose ports. Either specify both ports ( HOST:CONTAINER ), or just the container\nport (a random host port will be chosen). 
Note: When mapping ports in the HOST:CONTAINER format, you may experience\nerroneous results when using a container port lower than 60, because YAML will\nparse numbers in the format xx:yy as sexagesimal (base 60). For this reason,\nwe recommend always explicitly specifying your port mappings as strings. ports:\n - 3000 \n - 8000:8000 \n - 49100:22 \n - 127.0.0.1:8001:8001 expose Expose ports without publishing them to the host machine - they'll only be\naccessible to linked services. Only the internal port can be specified. expose:\n - 3000 \n - 8000 volumes Mount paths as volumes, optionally specifying a path on the host machine\n( HOST:CONTAINER ), or an access mode ( HOST:CONTAINER:ro ). volumes:\n - /var/lib/mysql\n - cache/:/tmp/cache\n - ~/configs:/etc/configs/:ro volumes_from Mount all of the volumes from another service or container. volumes_from:\n - service_name\n - container_name environment Add environment variables. You can use either an array or a dictionary. Environment variables with only a key are resolved to their values on the\nmachine Compose is running on, which can be helpful for secret or host-specific values. environment:\n RACK_ENV: development\n SESSION_SECRET:\n\nenvironment:\n - RACK_ENV=development\n - SESSION_SECRET env_file Add environment variables from a file. Can be a single value or a list. Environment variables specified in environment override these values. env_file:\n - .env RACK_ENV: development net Networking mode. Use the same values as the docker client --net parameter. net: bridge \nnet: none \nnet: container:[name or id] \nnet: host dns Custom DNS servers. Can be a single value or a list. dns: 8.8.8.8\ndns:\n - 8.8.8.8\n - 9.9.9.9 cap_add, cap_drop Add or drop container capabilities.\nSee man 7 capabilities for a full list. cap_add:\n - ALL\n\ncap_drop:\n - NET_ADMIN\n - SYS_ADMIN dns_search Custom DNS search domains. Can be a single value or a list. 
dns_search: example.com\ndns_search:\n - dc1.example.com\n - dc2.example.com working_dir, entrypoint, user, hostname, domainname, mem_limit, privileged, restart, stdin_open, tty, cpu_shares Each of these is a single value, analogous to its docker run counterpart. cpu_shares: 73\n\nworking_dir: /code\nentrypoint: /code/entrypoint.sh\nuser: postgresql\n\nhostname: foo\ndomainname: foo.com\n\nmem_limit: 1000000000\nprivileged: true\n\nrestart: always\n\nstdin_open: true\ntty: true",
|
|
"title": "docker-compose.yml reference"
|
|
},
|
|
{
|
|
"loc": "/compose/yml#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Command line reference Compose environment variables Compose command line completion",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/compose/env/",
|
|
"tags": "",
|
|
"text": "Environment variables reference\nNote: Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the docker-compose.yml documentation for details.\nCompose uses Docker links to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container.\nTo see what environment variables are available to a service, run docker-compose run SERVICE env.\nname_PORT\nFull URL, e.g. DB_PORT=tcp://172.17.0.5:5432\nname_PORT_num_protocol\nFull URL, e.g. DB_PORT_5432_TCP=tcp://172.17.0.5:5432\nname_PORT_num_protocol_ADDR\nContainer's IP address, e.g. DB_PORT_5432_TCP_ADDR=172.17.0.5\nname_PORT_num_protocol_PORT\nExposed port number, e.g. DB_PORT_5432_TCP_PORT=5432\nname_PORT_num_protocol_PROTO\nProtocol (tcp or udp), e.g. DB_PORT_5432_TCP_PROTO=tcp\nname_NAME\nFully qualified container name, e.g. DB_1_NAME=/myapp_web_1/myapp_db_1\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nYaml file reference\nCompose command line completion",
|
|
"title": "Compose ENV variables"
|
|
},
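The naming scheme in the environment variables reference above can be sketched in a few lines. This is an illustrative reconstruction, not Docker's code; the `link_env` helper and the `_1_NAME` handling are assumptions based on the documented examples:

```python
def link_env(name, ip, port, proto="tcp", container="/myapp_web_1/myapp_db_1"):
    """Sketch of the variables a Docker link injects for a service.

    Variable names follow the reference above; sample values are illustrative.
    """
    prefix = name.upper()
    url = "%s://%s:%d" % (proto, ip, port)
    full = "%s_PORT_%d_%s" % (prefix, port, proto.upper())
    return {
        prefix + "_PORT": url,          # DB_PORT
        full: url,                      # DB_PORT_5432_TCP
        full + "_ADDR": ip,             # DB_PORT_5432_TCP_ADDR
        full + "_PORT": str(port),      # DB_PORT_5432_TCP_PORT
        full + "_PROTO": proto,         # DB_PORT_5432_TCP_PROTO
        prefix + "_1_NAME": container,  # DB_1_NAME (assumes container db_1)
    }

env = link_env("db", "172.17.0.5", 5432)
print(env["DB_PORT_5432_TCP"])  # tcp://172.17.0.5:5432
```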
|
|
{
|
|
"loc": "/compose/env#environment-variables-reference",
|
|
"tags": "",
|
|
"text": "Note: Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the docker-compose.yml documentation for details. Compose uses Docker links to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container. To see what environment variables are available to a service, run docker-compose run SERVICE env . name _PORT \nFull URL, e.g. DB_PORT=tcp://172.17.0.5:5432 name _PORT_ num _ protocol \nFull URL, e.g. DB_PORT_5432_TCP=tcp://172.17.0.5:5432 name _PORT_ num _ protocol _ADDR \nContainer's IP address, e.g. DB_PORT_5432_TCP_ADDR=172.17.0.5 name _PORT_ num _ protocol _PORT \nExposed port number, e.g. DB_PORT_5432_TCP_PORT=5432 name _PORT_ num _ protocol _PROTO \nProtocol (tcp or udp), e.g. DB_PORT_5432_TCP_PROTO=tcp name _NAME \nFully qualified container name, e.g. DB_1_NAME=/myapp_web_1/myapp_db_1",
|
|
"title": "Environment variables reference"
|
|
},
|
|
{
|
|
"loc": "/compose/env#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Command line reference Yaml file reference Compose command line completion",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/compose/completion/",
|
|
"tags": "",
|
|
"text": "Command Completion\nCompose comes with command completion\nfor the bash shell.\nInstalling Command Completion\nMake sure bash completion is installed. If you use a recent Linux distribution in a non-minimal installation, bash completion should be available.\nOn a Mac, install with brew install bash-completion\nPlace the completion script in /etc/bash_completion.d/ (/usr/local/etc/bash_completion.d/ on a Mac), using e.g. \n curl -L https://raw.githubusercontent.com/docker/compose/1.1.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose\n\nCompletion will be available upon next login.\nAvailable completions\nDepending on what you typed on the command line so far, it will complete\n\navailable docker-compose commands\noptions that are available for a particular command\nservice names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For docker-compose scale, completed service names will automatically have \"=\" appended.\narguments for selected options, e.g. docker-compose kill -s will complete some signals like SIGHUP and SIGUSR1.\n\nEnjoy working with Compose faster and with fewer typos!\nCompose documentation\n\nInstalling Compose\nUser guide\nCommand line reference\nYaml file reference\nCompose environment variables",
|
|
"title": "Compose commandline completion"
|
|
},
|
|
{
|
|
"loc": "/compose/completion#command-completion",
|
|
"tags": "",
|
|
"text": "Compose comes with command completion \nfor the bash shell. Installing Command Completion Make sure bash completion is installed. If you use a recent Linux distribution in a non-minimal installation, bash completion should be available.\nOn a Mac, install with brew install bash-completion Place the completion script in /etc/bash_completion.d/ ( /usr/local/etc/bash_completion.d/ on a Mac), using e.g. curl -L https://raw.githubusercontent.com/docker/compose/1.1.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose Completion will be available upon next login. Available completions\nDepending on what you typed on the command line so far, it will complete available docker-compose commands options that are available for a particular command service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For docker-compose scale , completed service names will automatically have \"=\" appended. arguments for selected options, e.g. docker-compose kill -s will complete some signals like SIGHUP and SIGUSR1. Enjoy working with Compose faster and with fewer typos!",
|
|
"title": "Command Completion"
|
|
},
|
|
{
|
|
"loc": "/compose/completion#compose-documentation",
|
|
"tags": "",
|
|
"text": "Installing Compose User guide Command line reference Yaml file reference Compose environment variables",
|
|
"title": "Compose documentation"
|
|
},
|
|
{
|
|
"loc": "/swarm/discovery/",
|
|
"tags": "",
|
|
"text": "Discovery\nDocker Swarm comes with multiple Discovery backends\nExamples\nUsing the hosted discovery service\n# create a cluster\n$ swarm create\n6856663cdefdec325839a4b7e1de38e8 # - this is your unique cluster_id\n\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr=node_ip:2375 token://cluster_id\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp://swarm_ip:swarm_port token://cluster_id\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list token://cluster_id\nnode_ip:2375\n\n\nUsing a static file describing the cluster\n# for each of your nodes, add a line to a file\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ echo node_ip1:2375 >> /tmp/my_cluster\n$ echo node_ip2:2375 >> /tmp/my_cluster\n$ echo node_ip3:2375 >> /tmp/my_cluster\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp://swarm_ip:swarm_port file:///tmp/my_cluster\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list file:///tmp/my_cluster\nnode_ip1:2375\nnode_ip2:2375\nnode_ip3:2375\n\n\nUsing etcd\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 
192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr=node_ip:2375 etcd://etcd_ip/path\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp://swarm_ip:swarm_port etcd://etcd_ip/path\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list etcd://etcd_ip/path\nnode_ip:2375\n\n\nUsing consul\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr=node_ip:2375 consul://consul_addr/path\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp://swarm_ip:swarm_port consul://consul_addr/path\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list consul://consul_addr/path\nnode_ip:2375\n\n\nUsing zookeeper\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 
192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr=node_ip:2375 zk://zookeeper_addr1,zookeeper_addr2/path\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp://swarm_ip:swarm_port zk://zookeeper_addr1,zookeeper_addr2/path\n\n# use the regular docker cli\n$ docker -H tcp://swarm_ip:swarm_port info\n$ docker -H tcp://swarm_ip:swarm_port run ...\n$ docker -H tcp://swarm_ip:swarm_port ps\n$ docker -H tcp://swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list zk://zookeeper_addr1,zookeeper_addr2/path\nnode_ip:2375\n\n\nUsing a static list of ips\n# start the manager on any machine or your laptop\n$ swarm manage -H swarm_ip:swarm_port nodes://node_ip1:2375,node_ip2:2375\n#or\n$ swarm manage -H swarm_ip:swarm_port nodes://node_ip1:2375,node_ip2:2375\n\n# use the regular docker cli\n$ docker -H swarm_ip:swarm_port info\n$ docker -H swarm_ip:swarm_port run ...\n$ docker -H swarm_ip:swarm_port ps\n$ docker -H swarm_ip:swarm_port logs ...\n...\n\n\nContributing\nContributing a new discovery backend is easy:\nsimply implement this interface:\ntype DiscoveryService interface {\n Initialize(string, int) error\n Fetch() ([]string, error)\n Watch(WatchCallback)\n Register(string) error\n}\n\n\nExtra tips\nInitialize\ntakes the discovery address without the scheme and a heartbeat (in seconds)\nFetch\nreturns the list of all the nodes from the discovery\nWatch\ntriggers an update (Fetch); this can happen either via\na timer (like token) or via backend-specific features (like etcd)\nRegister\nadds a new node to the discovery",
|
|
"title": "Swarm discovery"
|
|
},
|
|
{
|
|
"loc": "/swarm/discovery#discovery",
|
|
"tags": "",
|
|
"text": "Docker Swarm comes with multiple Discovery backends",
|
|
"title": "Discovery"
|
|
},
|
|
{
|
|
"loc": "/swarm/discovery#examples",
|
|
"tags": "",
|
|
"text": "Using the hosted discovery service # create a cluster\n$ swarm create\n6856663cdefdec325839a4b7e1de38e8 # - this is your unique cluster_id \n\n# on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr= node_ip:2375 token:// cluster_id \n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp:// swarm_ip:swarm_port token:// cluster_id \n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list token:// cluster_id node_ip:2375 Using a static file describing the cluster # for each of your nodes, add a line to a file\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ echo node_ip1:2375 >> /tmp/my_cluster\n$ echo node_ip2:2375 >> /tmp/my_cluster\n$ echo node_ip3:2375 >> /tmp/my_cluster\n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp:// swarm_ip:swarm_port file:///tmp/my_cluster\n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list file:///tmp/my_cluster node_ip1:2375 node_ip2:2375 node_ip3:2375 Using etcd # on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 
192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr= node_ip:2375 etcd:// etcd_ip / path \n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp:// swarm_ip:swarm_port etcd:// etcd_ip / path \n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list etcd:// etcd_ip / path node_ip:2375 Using consul # on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr= node_ip:2375 consul:// consul_addr / path \n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp:// swarm_ip:swarm_port consul:// consul_addr / path \n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list consul:// consul_addr / path node_ip:2375 Using zookeeper # on each of your nodes, start the swarm agent\n# node_ip doesn't have to be public (eg. 
192.168.0.X),\n# as long as the swarm manager can access it.\n$ swarm join --addr= node_ip:2375 zk:// zookeeper_addr1 , zookeeper_addr2 / path \n\n# start the manager on any machine or your laptop\n$ swarm manage -H tcp:// swarm_ip:swarm_port zk:// zookeeper_addr1 , zookeeper_addr2 / path \n\n# use the regular docker cli\n$ docker -H tcp:// swarm_ip:swarm_port info\n$ docker -H tcp:// swarm_ip:swarm_port run ...\n$ docker -H tcp:// swarm_ip:swarm_port ps\n$ docker -H tcp:// swarm_ip:swarm_port logs ...\n...\n\n# list nodes in your cluster\n$ swarm list zk:// zookeeper_addr1 , zookeeper_addr2 / path node_ip:2375 Using a static list of ips # start the manager on any machine or your laptop\n$ swarm manage -H swarm_ip:swarm_port nodes:// node_ip1:2375 , node_ip2:2375 \n#or\n$ swarm manage -H swarm_ip:swarm_port nodes:// node_ip1:2375 , node_ip2:2375 \n\n# use the regular docker cli\n$ docker -H swarm_ip:swarm_port info\n$ docker -H swarm_ip:swarm_port run ...\n$ docker -H swarm_ip:swarm_port ps\n$ docker -H swarm_ip:swarm_port logs ...\n...",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/swarm/discovery#contributing",
|
|
"tags": "",
|
|
"text": "Contributing a new discovery backend is easy:\nsimply implement this interface: type DiscoveryService interface {\n Initialize(string, int) error\n Fetch() ([]string, error)\n Watch(WatchCallback)\n Register(string) error\n}",
|
|
"title": "Contributing"
|
|
},
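To make the interface above concrete, here is a hypothetical Python transliteration of DiscoveryService (Swarm's real implementations are in Go), sketched as the file:// backend from the examples; the class name and file handling are assumptions:

```python
import os


class FileDiscovery:
    """Hypothetical sketch of a DiscoveryService backed by a static file,
    as used by file:///tmp/my_cluster."""

    def Initialize(self, uri, heartbeat):
        # uri arrives without its scheme, e.g. "/tmp/my_cluster"
        self.path, self.heartbeat = uri, heartbeat

    def Fetch(self):
        # return every node address listed in the file
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [line.strip() for line in f if line.strip()]

    def Register(self, addr):
        # append a new node to the discovery file
        with open(self.path, "a") as f:
            f.write(addr + "\n")

    def Watch(self, callback):
        # a real backend would poll on a timer (token) or watch keys (etcd);
        # here we just push the current state once
        callback(self.Fetch())
```

A backend like this would be selected by its URI scheme, with Fetch and Watch feeding the manager the current node list.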
|
|
{
|
|
"loc": "/swarm/discovery#extra-tips",
|
|
"tags": "",
|
|
"text": "Initialize takes the discovery address without the scheme and a heartbeat (in seconds) Fetch returns the list of all the nodes from the discovery Watch triggers an update ( Fetch ); this can happen either via\na timer (like token ) or via backend-specific features (like etcd ) Register adds a new node to the discovery",
|
|
"title": "Extra tips"
|
|
},
|
|
{
|
|
"loc": "/swarm/scheduler/strategy/",
|
|
"tags": "",
|
|
"text": "Strategies\nThe Docker Swarm scheduler comes with multiple strategies.\nThese strategies are used to rank nodes using a score computed by the strategy.\nDocker Swarm currently supports 2 strategies:\n BinPacking\n Random\nYou can choose the strategy you want to use with the --strategy flag of swarm manage\nBinPacking strategy\nThe BinPacking strategy will rank the nodes by their available CPU and RAM and will return the\nnode that is already the most packed. This avoids fragmentation and leaves room for bigger containers\non unused machines.\nFor instance, let's say that both node-1 and node-2 have 2G of RAM:\n$ docker run -d -P -m 1G --name db mysql\nf8b693db9cd6\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 mysql:latest mysqld Less than a second ago running 192.168.0.42:49178-3306/tcp node-1 db\n\n\nIn this case, node-1 was chosen randomly, because no containers were running, so node-1 and\nnode-2 had the same score.\nNow we start another container, asking for 1G of RAM again.\n$ docker run -d -P -m 1G --name frontend nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.42:49177-80/tcp node-1 frontend\nf8b693db9cd6 mysql:latest mysqld Up About a minute running 192.168.0.42:49178-3306/tcp node-1 db\n\n\nThe container frontend was also started on node-1 because it was already the most packed\nnode. This allows us to start a container requiring 2G of RAM on node-2.\nRandom strategy\nThe Random strategy, as its name suggests, chooses a node at random; it is used mainly for debugging.",
|
|
"title": "Swarm strategies"
|
|
},
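The BinPacking choice described above can be sketched in a few lines. This is an illustrative reconstruction of the idea, not Swarm's actual scheduler code; the function name and node representation are hypothetical, and only RAM is considered:

```python
def binpack_choose(nodes, mem_needed):
    """Pick the node that is already most packed (least free RAM) yet still
    has room for the requested container."""
    fitting = [n for n in nodes if n["free_ram"] >= mem_needed]
    if not fitting:
        return None  # no node can host the container
    return min(fitting, key=lambda n: n["free_ram"])["name"]

# Both nodes started with 2G; after db (1G) landed on node-1, a second 1G
# container packs onto node-1 too, keeping node-2 free for a 2G container.
nodes = [{"name": "node-1", "free_ram": 1}, {"name": "node-2", "free_ram": 2}]
print(binpack_choose(nodes, 1))  # node-1
print(binpack_choose(nodes, 2))  # node-2
```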
|
|
{
|
|
"loc": "/swarm/scheduler/strategy#strategies",
|
|
"tags": "",
|
|
"text": "The Docker Swarm scheduler comes with multiple strategies. These strategies are used to rank nodes using a score computed by the strategy. Docker Swarm currently supports 2 strategies: BinPacking Random You can choose the strategy you want to use with the --strategy flag of swarm manage",
|
|
"title": "Strategies"
|
|
},
|
|
{
|
|
"loc": "/swarm/scheduler/strategy#binpacking-strategy",
|
|
"tags": "",
|
|
"text": "The BinPacking strategy will rank the nodes by their available CPU and RAM and will return the\nnode that is already the most packed. This avoids fragmentation and leaves room for bigger containers\non unused machines. For instance, let's say that both node-1 and node-2 have 2G of RAM: $ docker run -d -P -m 1G --name db mysql\nf8b693db9cd6\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 mysql:latest mysqld Less than a second ago running 192.168.0.42:49178- 3306/tcp node-1 db In this case, node-1 was chosen randomly, because no containers were running, so node-1 and node-2 had the same score. Now we start another container, asking for 1G of RAM again. $ docker run -d -P -m 1G --name frontend nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.42:49177- 80/tcp node-1 frontend\nf8b693db9cd6 mysql:latest mysqld Up About a minute running 192.168.0.42:49178- 3306/tcp node-1 db The container frontend was also started on node-1 because it was already the most packed\nnode. This allows us to start a container requiring 2G of RAM on node-2 .",
|
|
"title": "BinPacking strategy"
|
|
},
|
|
{
|
|
"loc": "/swarm/scheduler/strategy#random-strategy",
|
|
"tags": "",
|
|
"text": "The Random strategy, as its name suggests, chooses a node at random; it is used mainly for debugging.",
|
|
"title": "Random strategy"
|
|
},
|
|
{
|
|
"loc": "/swarm/scheduler/filter/",
|
|
"tags": "",
|
|
"text": "Filters\nThe Docker Swarm scheduler comes with multiple filters.\nThese filters are used to schedule containers on a subset of nodes.\nDocker Swarm currently supports 4 filters:\n Constraint\n Affinity\n Port\n Health\nYou can choose the filter(s) you want to use with the --filter flag of swarm manage\nConstraint Filter\nConstraints are key/value pairs associated with particular nodes. You can see them\nas node tags.\nWhen creating a container, the user can select a subset of nodes that should be\nconsidered for scheduling by specifying one or more sets of matching key/value pairs.\nThis approach has several practical use cases such as:\n Selecting specific host properties (such as storage=ssd, in order to schedule\n containers on specific hardware).\n Tagging nodes based on their physical location (region=us-east, to force\n containers to run in a given location).\n Logical cluster partitioning (environment=production, to split a cluster into\n sub-clusters with different properties).\nTo tag a node with a specific set of key/value pairs, one must pass a list of\n--label options at docker startup time.\nFor instance, let's start node-1 with the storage=ssd label:\n$ docker -d --label storage=ssd\n$ swarm join --addr=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX\n\n\nAgain, but this time node-2 with storage=disk:\n$ docker -d --label storage=disk\n$ swarm join --addr=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX\n\n\nOnce the nodes are registered with the cluster, the master pulls their respective\ntags and will take them into account when scheduling new containers.\nLet's start a MySQL server and make sure it gets good I/O performance by selecting\nnodes with flash drives:\n$ docker run -d -P -e constraint:storage==ssd --name db mysql\nf8b693db9cd6\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 mysql:latest mysqld Less than a second ago running 192.168.0.42:49178-3306/tcp node-1 db\n\n\nIn this case, the master 
selected all nodes that met the storage=ssd constraint\nand applied resource management on top of them, as discussed earlier.\nnode-1 was selected in this example since it's the only host with flash storage.\nNow we want to run an nginx frontend in our cluster. However, we don't want\nflash drives since we'll mostly write logs to disk.\n$ docker run -d -P -e constraint:storage==disk --name frontend nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.43:49177-80/tcp node-2 frontend\nf8b693db9cd6 mysql:latest mysqld Up About a minute running 192.168.0.42:49178-3306/tcp node-1 db\n\n\nThe scheduler selected node-2 since it was started with the storage=disk label.\nStandard Constraints\nAdditionally, a standard set of constraints can be used when scheduling containers\nwithout specifying them when starting the node. Those tags are sourced from\ndocker info and currently include:\n\nstoragedriver\nexecutiondriver\nkernelversion\noperatingsystem\n\nAffinity Filter\nContainers\nYou can schedule 2 containers and make container #2 run next to container #1.\n$ docker run -d -p 80:80 --name front nginx\n 87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 front\n\n\nUsing -e affinity:container==front will schedule a container next to the container front.\nYou can also use IDs instead of names: -e affinity:container==87c4376856a8\n$ docker run -d --name logger -e affinity:container==front logger\n 87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 front\n963841b138d8 logger:latest logger Less than a second ago running node-1 logger\n\n\nThe logger container ends up on node-1 because of its affinity with the 
container front.\nImages\nYou can schedule a container only on nodes where the image is already pulled.\n$ docker -H node-1:2375 pull redis\n$ docker -H node-2:2375 pull mysql\n$ docker -H node-3:2375 pull redis\n\n\nHere only node-1 and node-3 have the redis image. Using -e affinity:image==redis we can\nschedule containers only on these 2 nodes. You can also use the image ID instead of its name.\n$ docker run -d --name redis1 -e affinity:image==redis redis\n$ docker run -d --name redis2 -e affinity:image==redis redis\n$ docker run -d --name redis3 -e affinity:image==redis redis\n$ docker run -d --name redis4 -e affinity:image==redis redis\n$ docker run -d --name redis5 -e affinity:image==redis redis\n$ docker run -d --name redis6 -e affinity:image==redis redis\n$ docker run -d --name redis7 -e affinity:image==redis redis\n$ docker run -d --name redis8 -e affinity:image==redis redis\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 redis:latest redis Less than a second ago running node-1 redis1\n1212386856a8 redis:latest redis Less than a second ago running node-1 redis2\n87c4376639a8 redis:latest redis Less than a second ago running node-3 redis3\n1234376856a8 redis:latest redis Less than a second ago running node-1 redis4\n86c2136253a8 redis:latest redis Less than a second ago running node-3 redis5\n87c3236856a8 redis:latest redis Less than a second ago running node-3 redis6\n87c4376856a8 redis:latest redis Less than a second ago running node-3 redis7\n963841b138d8 redis:latest redis Less than a second ago running node-1 redis8\n\n\nAs you can see here, the containers were only scheduled on nodes with the redis image already pulled.\nExpression Syntax\nAn affinity or a constraint expression consists of a key and a value.\nA key must conform to the alphanumeric pattern, starting with a letter or underscore.\nA value must be one of the following:\n An alphanumeric string, which may include dots, hyphens, and underscores.\n A globbing pattern, 
e.g., abc*.\n* A regular expression in the form of /regexp/. We support Go's regular expression syntax.\nSwarm currently supports the following affinity/constraint operators: == and !=.\nFor example,\n constraint:node==node1 will match node node1.\n constraint:node!=node1 will match all nodes, except node1.\n constraint:region!=us* will match all nodes outside the regions prefixed with us.\n constraint:node==/node[12]/ will match nodes node1 and node2.\n constraint:node==/node\\d/ will match all nodes with node + 1 digit.\n constraint:node!=/node-[01]/ will match all nodes, except node-0 and node-1.\n constraint:node!=/foo\\[bar\\]/ will match all nodes, except foo[bar]. You can see the use of escape characters here.\n constraint:node==/(?i)node1/ will match node node1 case-insensitively. So 'NoDe1' or 'NODE1' will also be matched.\nPort Filter\nWith this filter, ports are considered a unique resource.\n$ docker run -d -p 80:80 nginx\n87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 prickly_engelbart\n\n\nThe Docker cluster selects a node where public port 80 is available and schedules\na container on it, in this case node-1.\nAttempting to run another container with public port 80 will result in\nthe cluster selecting a different node, since that port is already occupied on node-1:\n$ docker run -d -p 80:80 nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.43:80-80/tcp node-2 dreamy_turing\n87c4376856a8 nginx:latest nginx Up About a minute running 192.168.0.42:80-80/tcp node-1 prickly_engelbart\n\n\nAgain, repeating the same command will result in the selection of node-3, since\nport 80 is neither available on node-1 nor node-2:\n$ docker run -d -p 80:80 nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE 
COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 nginx:latest nginx Less than a second ago running 192.168.0.44:80-80/tcp node-3 stoic_albattani\n963841b138d8 nginx:latest nginx Up About a minute running 192.168.0.43:80-80/tcp node-2 dreamy_turing\n87c4376856a8 nginx:latest nginx Up About a minute running 192.168.0.42:80-80/tcp node-1 prickly_engelbart\n\n\nFinally, the Docker cluster will refuse to run another container that requires port\n80 since not a single node in the cluster has it available:\n$ docker run -d -p 80:80 nginx\n2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container\n\n\nDependency Filter\nThis filter co-schedules dependent containers on the same node.\nCurrently, dependencies are declared as follows:\n\nShared volumes: --volumes-from=dependency\nLinks: --link=dependency:alias\nShared network stack: --net=container:dependency\n\nSwarm will attempt to co-locate the dependent container on the same node. If it\ncannot be done (because the dependent container doesn't exist, or because the\nnode doesn't have enough resources), it will prevent the container creation.\nThe combination of multiple dependencies will be honored if possible. For\ninstance, --volumes-from=A --net=container:B will attempt to co-locate the\ncontainer on the same node as A and B. If those containers are running on\ndifferent nodes, Swarm will prevent you from scheduling the container.\nHealth Filter\nThis filter will prevent scheduling containers on unhealthy nodes.",
"title": "Swarm filters"
},
{
"loc": "/swarm/scheduler/filter#filters",
"tags": "",
"text": "The Docker Swarm scheduler comes with multiple filters. These filters are used to schedule containers on a subset of nodes. Docker Swarm currently supports 5 filters: Constraint Affinity Port Dependency Health You can choose the filter(s) you want to use with the --filter flag of swarm manage.",
"title": "Filters"
},
{
"loc": "/swarm/scheduler/filter#constraint-filter",
"tags": "",
"text": "Constraints are key/value pairs associated with particular nodes. You can see them\nas node tags . When creating a container, the user can select a subset of nodes that should be\nconsidered for scheduling by specifying one or more sets of matching key/value pairs. This approach has several practical use cases such as: Selecting specific host properties (such as storage=ssd , in order to schedule\n containers on specific hardware). Tagging nodes based on their physical location ( region=us-east , to force\n containers to run on a given location).\n Logical cluster partitioning ( environment=production , to split a cluster into\n sub-clusters with different properties). To tag a node with a specific set of key/value pairs, one must pass a list of --label options at docker startup time. For instance, let's start node-1 with the storage=ssd label: $ docker -d --label storage=ssd\n$ swarm join --addr=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX Again, but this time node-2 with storage=disk : $ docker -d --label storage=disk\n$ swarm join --addr=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX Once the nodes are registered with the cluster, the master pulls their respective\ntags and will take them into account when scheduling new containers. Let's start a MySQL server and make sure it gets good I/O performance by selecting\nnodes with flash drives: $ docker run -d -P -e constraint:storage==ssd --name db mysql\nf8b693db9cd6\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 mysql:latest mysqld Less than a second ago running 192.168.0.42:49178-3306/tcp node-1 db In this case, the master selected all nodes that met the storage=ssd constraint\nand applied resource management on top of them, as discussed earlier. node-1 was selected in this example since it's the only host with flash storage. Now we want to run an nginx frontend in our cluster. However, we don't want flash drives since we'll mostly write logs to disk. 
$ docker run -d -P -e constraint:storage==disk --name frontend nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.43:49177-80/tcp node-2 frontend\nf8b693db9cd6 mysql:latest mysqld Up About a minute running 192.168.0.42:49178-3306/tcp node-1 db The scheduler selected node-2 since it was started with the storage=disk label.",
"title": "Constraint Filter"
},
{
"loc": "/swarm/scheduler/filter#standard-constraints",
"tags": "",
"text": "Additionally, a standard set of constraints can be used when scheduling containers\nwithout specifying them when starting the node. Those tags are sourced from docker info and currently include: storagedriver executiondriver kernelversion operatingsystem",
"title": "Standard Constraints"
},
{
"loc": "/swarm/scheduler/filter#affinity-filter",
"tags": "",
"text": "Containers You can schedule 2 containers and make container #2 run next to container #1. $ docker run -d -p 80:80 --name front nginx\n 87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 front Using -e affinity:container==front will schedule a container next to the container front .\nYou can also use IDs instead of names: -e affinity:container==87c4376856a8 $ docker run -d --name logger -e affinity:container==front logger\n 87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 front\n963841b138d8 logger:latest logger Less than a second ago running node-1 logger The logger container ends up on node-1 because of its affinity with the container front . Images You can schedule a container only on nodes where the image is already pulled. $ docker -H node-1:2375 pull redis\n$ docker -H node-2:2375 pull mysql\n$ docker -H node-3:2375 pull redis Here only node-1 and node-3 have the redis image. Using -e affinity:image==redis we can\nschedule containers only on these 2 nodes. You can also use the image ID instead of its name. 
$ docker run -d --name redis1 -e affinity:image==redis redis\n$ docker run -d --name redis2 -e affinity:image==redis redis\n$ docker run -d --name redis3 -e affinity:image==redis redis\n$ docker run -d --name redis4 -e affinity:image==redis redis\n$ docker run -d --name redis5 -e affinity:image==redis redis\n$ docker run -d --name redis6 -e affinity:image==redis redis\n$ docker run -d --name redis7 -e affinity:image==redis redis\n$ docker run -d --name redis8 -e affinity:image==redis redis\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 redis:latest redis Less than a second ago running node-1 redis1\n1212386856a8 redis:latest redis Less than a second ago running node-1 redis2\n87c4376639a8 redis:latest redis Less than a second ago running node-3 redis3\n1234376856a8 redis:latest redis Less than a second ago running node-1 redis4\n86c2136253a8 redis:latest redis Less than a second ago running node-3 redis5\n87c3236856a8 redis:latest redis Less than a second ago running node-3 redis6\n87c4376856a8 redis:latest redis Less than a second ago running node-3 redis7\n963841b138d8 redis:latest redis Less than a second ago running node-1 redis8 As you can see here, the containers were only scheduled on nodes with the redis image already pulled. Expression Syntax An affinity or a constraint expression consists of a key and a value .\nA key must conform to the alphanumeric pattern, starting with a letter or underscore. A value must be one of the following: An alphanumeric string, which may include dots, hyphens, and underscores. A globbing pattern, e.g., abc* .\n* A regular expression in the form of /regexp/ . We support Go's regular expression syntax. Swarm currently supports the following affinity/constraint operators: == and != . For example, constraint:node==node1 will match node node1 . constraint:node!=node1 will match all nodes, except node1 . constraint:region!=us* will match all nodes outside the regions prefixed with us . 
constraint:node==/node[12]/ will match nodes node1 and node2 . constraint:node==/node\\d/ will match all nodes with node + 1 digit. constraint:node!=/node-[01]/ will match all nodes, except node-0 and node-1 . constraint:node!=/foo\\[bar\\]/ will match all nodes, except foo[bar] . You can see the use of escape characters here. constraint:node==/(?i)node1/ will match node node1 case-insensitively. So 'NoDe1' or 'NODE1' will also be matched.",
"title": "Affinity Filter"
},
{
"loc": "/swarm/scheduler/filter#port-filter",
"tags": "",
"text": "With this filter, ports are considered a unique resource. $ docker run -d -p 80:80 nginx\n87c4376856a8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n87c4376856a8 nginx:latest nginx Less than a second ago running 192.168.0.42:80-80/tcp node-1 prickly_engelbart The Docker cluster selects a node where public port 80 is available and schedules\na container on it, in this case node-1 . Attempting to run another container with public port 80 will result in\nthe cluster selecting a different node, since that port is already occupied on node-1 : $ docker run -d -p 80:80 nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\n963841b138d8 nginx:latest nginx Less than a second ago running 192.168.0.43:80-80/tcp node-2 dreamy_turing\n87c4376856a8 nginx:latest nginx Up About a minute running 192.168.0.42:80-80/tcp node-1 prickly_engelbart Again, repeating the same command will result in the selection of node-3 , since\nport 80 is neither available on node-1 nor node-2 : $ docker run -d -p 80:80 nginx\n963841b138d8\n\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES\nf8b693db9cd6 nginx:latest nginx Less than a second ago running 192.168.0.44:80-80/tcp node-3 stoic_albattani\n963841b138d8 nginx:latest nginx Up About a minute running 192.168.0.43:80-80/tcp node-2 dreamy_turing\n87c4376856a8 nginx:latest nginx Up About a minute running 192.168.0.42:80-80/tcp node-1 prickly_engelbart Finally, the Docker cluster will refuse to run another container that requires port 80 since not a single node in the cluster has it available: $ docker run -d -p 80:80 nginx\n2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container",
"title": "Port Filter"
},
{
"loc": "/swarm/scheduler/filter#dependency-filter",
"tags": "",
"text": "This filter co-schedules dependent containers on the same node. Currently, dependencies are declared as follows: Shared volumes: --volumes-from=dependency Links: --link=dependency:alias Shared network stack: --net=container:dependency Swarm will attempt to co-locate the dependent container on the same node. If it\ncannot be done (because the dependent container doesn't exist, or because the\nnode doesn't have enough resources), it will prevent the container creation. The combination of multiple dependencies will be honored if possible. For\ninstance, --volumes-from=A --net=container:B will attempt to co-locate the\ncontainer on the same node as A and B . If those containers are running on\ndifferent nodes, Swarm will prevent you from scheduling the container.",
"title": "Dependency Filter"
},
{
"loc": "/swarm/scheduler/filter#health-filter",
"tags": "",
"text": "This filter will prevent scheduling containers on unhealthy nodes.",
"title": "Health Filter"
},
{
"loc": "/swarm/API/",
"tags": "",
"text": "Docker Swarm API\nThe Docker Swarm API is compatible with the Official Docker API:\nHere are the main differences:\nSome endpoints are not (yet) implemented\nGET /images/get\nGET /images/{name:.*}/get\nGET /containers/{name:.*}/attach/ws\n\nPOST /commit\nPOST /build\nPOST /images/create\nPOST /images/load\nPOST /images/{name:.*}/push\nPOST /images/{name:.*}/tag\n\nDELETE /images/{name:.*}\n\n\nSome endpoints have more information\n\nGET \"/containers/{name:.*}/json\": New field Node added:\n\nNode: {\n ID: ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX,\n IP: 0.0.0.0,\n Addr: http://0.0.0.0:4243,\n Name: vagrant-ubuntu-saucy-64,\n Cpus: 1,\n Memory: 2099654656,\n Labels: {\n executiondriver: native-0.2,\n kernelversion: 3.11.0-15-generic,\n operatingsystem: Ubuntu 13.10,\n storagedriver: aufs\n }\n },\n\n\n\n\nGET \"/containers/{name:.*}/json\": HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0\n\n\nGET \"/containers/json\": Node's name prepended to the container name.\n\n\nGET \"/containers/json\": HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0\n\n\nGET \"/containers/json\" : Containers started from the swarm official image are hidden by default, use all=1 to display them.",
"title": "Swarm API"
},
{
"loc": "/swarm/API#docker-swarm-api",
"tags": "",
"text": "The Docker Swarm API is compatible with the Official Docker API : Here are the main differences:",
"title": "Docker Swarm API"
},
{
"loc": "/swarm/API#some-endpoints-are-not-yet-implemented",
"tags": "",
"text": "GET /images/get \nGET /images/{name:.*}/get \nGET /containers/{name:.*}/attach/ws \n\nPOST /commit \nPOST /build \nPOST /images/create \nPOST /images/load \nPOST /images/{name:.*}/push \nPOST /images/{name:.*}/tag \n\nDELETE /images/{name:.*}",
"title": "Some endpoints are not (yet) implemented"
},
{
"loc": "/swarm/API#some-endpoints-have-more-information",
"tags": "",
"text": "GET \"/containers/{name:.*}/json\" : New field Node added: Node : {\n ID : ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX ,\n IP : 0.0.0.0 ,\n Addr : http://0.0.0.0:4243 ,\n Name : vagrant-ubuntu-saucy-64 ,\n Cpus : 1,\n Memory : 2099654656,\n Labels : {\n executiondriver : native-0.2 ,\n kernelversion : 3.11.0-15-generic ,\n operatingsystem : Ubuntu 13.10 ,\n storagedriver : aufs \n }\n }, GET \"/containers/{name:.*}/json\" : HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0 GET \"/containers/json\" : Node's name prepended to the container name. GET \"/containers/json\" : HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0 GET \"/containers/json\" : Containers started from the swarm official image are hidden by default, use all=1 to display them.",
"title": "Some endpoints have more information"
},
{
"loc": "/reference/api/",
"tags": "",
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api#table-of-contents",
"tags": "",
"text": "",
"title": "Table of Contents"
},
{
"loc": "/reference/api#about",
"tags": "",
"text": "Docker Release Notes Understanding Docker",
"title": "About"
},
{
"loc": "/reference/api#installation",
"tags": "",
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
"title": "Installation"
},
{
"loc": "/reference/api#user-guide",
"tags": "",
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
"title": "User Guide"
},
{
"loc": "/reference/api#docker-hub",
"tags": "",
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
"title": "Docker Hub"
},
{
"loc": "/reference/api#examples",
"tags": "",
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
"title": "Examples"
},
{
"loc": "/reference/api#articles",
"tags": "",
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
"title": "Articles"
},
{
"loc": "/reference/api#reference",
"tags": "",
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
"title": "Reference"
},
{
"loc": "/reference/api#contributor-guide",
"tags": "",
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
"title": "Contributor Guide"
},
{
"loc": "/reference/api/docker-io_api/",
"tags": "",
"text": "Docker Hub API\n\nThis is the REST API for Docker Hub.\nAuthorization is done with basic auth over SSL\nNot all commands require authentication, only those noted as such.\n\nRepositories\nUser Repository\nCreate a user repository\nPUT /v1/repositories/(namespace)/(repo_name)/\nCreate a user repository with the given namespace and repo_name.\nExample Request:\n PUT /v1/repositories/foo/bar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"}]\n\nParameters:\n\nnamespace \u2013 the namespace for the repo\nrepo_name \u2013 the name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 Created\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n\nDelete a user repository\nDELETE /v1/repositories/(namespace)/(repo_name)/\nDelete a user repository with the given namespace and repo_name.\nExample Request:\n DELETE /v1/repositories/foo/bar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n \"\"\n\nParameters:\n\nnamespace \u2013 the namespace for the repo\nrepo_name \u2013 the name for the repo\n\nExample Response:\n HTTP/1.1 202\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n 
\"\"\n\nStatus Codes:\n\n200 \u2013 Deleted\n202 \u2013 Accepted\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n\nLibrary Repository\nCreate a library repository\nPUT /v1/repositories/(repo_name)/\nCreate a library repository with the given repo_name.\nThis is a restricted feature only available to docker admins.\n\nWhen namespace is missing, it is assumed to be library\n\nExample Request:\n PUT /v1/repositories/foobar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"}]\n\nParameters:\n\nrepo_name \u2013 the library name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"library/foobar\",access=write\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 Created\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n\nDelete a library repository\nDELETE /v1/repositories/(repo_name)/\nDelete a library repository with the given repo_name.\nThis is a restricted feature only available to docker admins.\n\nWhen namespace is missing, it is assumed to be library\n\nExample Request:\n DELETE /v1/repositories/foobar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n \"\"\n\nParameters:\n\nrepo_name \u2013 the library name for the repo\n\nExample Response:\n HTTP/1.1 202\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"library/foobar\",access=delete\n 
X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 Deleted\n202 \u2013 Accepted\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n\nRepository Images\nUser Repository Images\nUpdate user repository images\nPUT /v1/repositories/(namespace)/(repo_name)/images\nUpdate the images for a user repo.\nExample Request:\n PUT /v1/repositories/foo/bar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"}]\n\nParameters:\n\nnamespace \u2013 the namespace for the repo\nrepo_name \u2013 the name for the repo\n\nExample Response:\n HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\"\n\nStatus Codes:\n\n204 \u2013 Created\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active or permission denied\n\nList user repository images\nGET /v1/repositories/(namespace)/(repo_name)/images\nGet the images for a user repo.\nExample Request:\n GET /v1/repositories/foo/bar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n\nParameters:\n\nnamespace \u2013 the namespace for the repo\nrepo_name \u2013 the name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"},\n {\"id\": \"ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds\",\n \"checksum\": \"34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew\"}]\n\nStatus 
Codes:\n\n200 \u2013 OK\n404 \u2013 Not found\n\nLibrary Repository Images\nUpdate library repository images\nPUT /v1/repositories/(repo_name)/images\nUpdate the images for a library repo.\nExample Request:\n PUT /v1/repositories/foobar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"}]\n\nParameters:\n\nrepo_name \u2013 the library name for the repo\n\nExample Response:\n HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\"\n\nStatus Codes:\n\n204 \u2013 Created\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active or permission denied\n\nList library repository images\nGET /v1/repositories/(repo_name)/images\nGet the images for a library repo.\nExample Request:\n GET /v1/repositories/foobar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n\nParameters:\n\nrepo_name \u2013 the library name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"},\n {\"id\": \"ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds\",\n \"checksum\": \"34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew\"}]\n\nStatus Codes:\n\n200 \u2013 OK\n404 \u2013 Not found\n\nRepository Authorization\nLibrary Repository\nAuthorize a token for a library\nPUT /v1/repositories/(repo_name)/auth\nAuthorize a token for a library repo\nExample Request:\n PUT /v1/repositories/foobar/auth HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Token 
signature=123abc,repository=\"library/foobar\",access=write\n\nParameters:\n\nrepo_name \u2013 the library name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n \"OK\"\n\nStatus Codes:\n\n200 \u2013 OK\n403 \u2013 Permission denied\n404 \u2013 Not found\n\nUser Repository\nAuthorize a token for a user repository\nPUT /v1/repositories/(namespace)/(repo_name)/auth\nAuthorize a token for a user repo\nExample Request:\n PUT /v1/repositories/foo/bar/auth HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=write\n\nParameters:\n\nnamespace \u2013 the namespace for the repo\nrepo_name \u2013 the name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n \"OK\"\n\nStatus Codes:\n\n200 \u2013 OK\n403 \u2013 Permission denied\n404 \u2013 Not found\n\nUsers\nUser Login\nGET /v1/users/\nIf you want to check your login, you can try this endpoint.\nExample Request:\n GET /v1/users/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Basic akmklmasadalkm==\n\nExample Response:\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: application/json\n\n OK\n\nStatus Codes:\n\n200 \u2013 no error\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n\nUser Register\nPOST /v1/users/\nRegistering a new account.\nExample request:\n POST /v1/users/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n\n {\"email\": \"sam@docker.com\",\n \"password\": \"toto42\",\n \"username\": \"foobar\"}\n\nJson Parameters:\n\nemail \u2013 valid email address that needs to be confirmed\nusername \u2013 min 4 characters, max 30 characters, must match\n the regular expression [a-z0-9_].\npassword \u2013 min 5 characters\n\nExample Response:\n HTTP/1.1 201 OK\n Vary: Accept\n Content-Type: application/json\n\n \"User Created\"\n\nStatus Codes:\n\n201 \u2013 User 
Created\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n\nUpdate User\nPUT /v1/users/(username)/\nChange a password or email address for a given user. If you pass in an\nemail, it will be added to your account; the old one will not be\nremoved. Passwords will be updated.\nIt is up to the client to verify that the password being sent is\nthe one the user intends. A common approach is to have them type it\ntwice.\nExample Request:\n PUT /v1/users/fakeuser/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n {\"email\": \"sam@docker.com\",\n \"password\": \"toto42\"}\n\nParameters:\n\nusername \u2013 username for the person you want to update\n\nExample Response:\n HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\"\n\nStatus Codes:\n\n204 \u2013 User Updated\n400 \u2013 Errors (invalid json, missing or invalid fields, etc)\n401 \u2013 Unauthorized\n403 \u2013 Account is not Active\n404 \u2013 User not found",
"title": "Docker Hub API"
},
{
"loc": "/reference/api/docker-io_api#docker-hub-api",
"tags": "",
"text": "This is the REST API for Docker Hub. Authorization is done with basic auth over SSL. Not all commands require authentication, only those noted as such.",
"title": "Docker Hub API"
},
{
"loc": "/reference/api/docker-io_api#repositories",
"tags": "",
"text": "",
"title": "Repositories"
},
{
"loc": "/reference/api/docker-io_api#user-repository",
"tags": "",
"text": "Create a user repository PUT /v1/repositories/(namespace)/(repo_name)/ Create a user repository with the given namespace and repo_name . Example Request : PUT /v1/repositories/foo/bar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"}] Parameters: namespace \u2013 the namespace for the repo repo_name \u2013 the name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\" Status Codes: 200 \u2013 Created 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active Delete a user repository DELETE /v1/repositories/(namespace)/(repo_name)/ Delete a user repository with the given namespace and repo_name . Example Request : DELETE /v1/repositories/foo/bar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n \"\" Parameters: namespace \u2013 the namespace for the repo repo_name \u2013 the name for the repo Example Response : HTTP/1.1 202\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\" Status Codes: 200 \u2013 Deleted 202 \u2013 Accepted 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active",
"title": "User Repository"
},
{
"loc": "/reference/api/docker-io_api#library-repository",
"tags": "",
"text": "Create a library repository PUT /v1/repositories/(repo_name)/ Create a library repository with the given repo_name .\nThis is a restricted feature only available to docker admins. When namespace is missing, it is assumed to be library Example Request : PUT /v1/repositories/foobar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"}] Parameters: repo_name \u2013 the library name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"library/foobar\",access=write\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=write\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\" Status Codes: 200 \u2013 Created 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active Delete a library repository DELETE /v1/repositories/(repo_name)/ Delete a library repository with the given repo_name .\nThis is a restricted feature only available to docker admins. 
When namespace is missing, it is assumed to be library Example Request : DELETE /v1/repositories/foobar/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n X-Docker-Token: true\n\n \"\" Parameters: repo_name \u2013 the library name for the repo Example Response : HTTP/1.1 202\n Vary: Accept\n Content-Type: application/json\n WWW-Authenticate: Token signature=123abc,repository=\"library/foobar\",access=delete\n X-Docker-Token: signature=123abc,repository=\"foo/bar\",access=delete\n X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]\n\n \"\" Status Codes: 200 \u2013 Deleted 202 \u2013 Accepted 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active",
"title": "Library Repository"
},
{
"loc": "/reference/api/docker-io_api#repository-images",
"tags": "",
"text": "",
"title": "Repository Images"
},
{
"loc": "/reference/api/docker-io_api#user-repository-images",
"tags": "",
"text": "Update user repository images PUT /v1/repositories/(namespace)/(repo_name)/images Update the images for a user repo. Example Request : PUT /v1/repositories/foo/bar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"}] Parameters: namespace \u2013 the namespace for the repo repo_name \u2013 the name for the repo Example Response : HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\" Status Codes: 204 \u2013 Created 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active or permission denied List user repository images GET /v1/repositories/(namespace)/(repo_name)/images Get the images for a user repo. Example Request : GET /v1/repositories/foo/bar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json Parameters: namespace \u2013 the namespace for the repo repo_name \u2013 the name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"},\n {\"id\": \"ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds\",\n \"checksum\": \"34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew\"}] Status Codes: 200 \u2013 OK 404 \u2013 Not found",
"title": "User Repository Images"
},
{
"loc": "/reference/api/docker-io_api#library-repository-images",
"tags": "",
"text": "Update library repository images PUT /v1/repositories/(repo_name)/images Update the images for a library repo. Example Request : PUT /v1/repositories/foobar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"}] Parameters: repo_name \u2013 the library name for the repo Example Response : HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\" Status Codes: 204 \u2013 Created 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active or permission denied List library repository images GET /v1/repositories/(repo_name)/images Get the images for a library repo. Example Request : GET /v1/repositories/foobar/images HTTP/1.1\n Host: index.docker.io\n Accept: application/json Parameters: repo_name \u2013 the library name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n [{\"id\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"checksum\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"},\n {\"id\": \"ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds\",\n \"checksum\": \"34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew\"}] Status Codes: 200 \u2013 OK 404 \u2013 Not found",
"title": "Library Repository Images"
},
{
"loc": "/reference/api/docker-io_api#repository-authorization",
"tags": "",
"text": "",
"title": "Repository Authorization"
},
{
"loc": "/reference/api/docker-io_api#library-repository_1",
"tags": "",
"text": "Authorize a token for a library PUT /v1/repositories/(repo_name)/auth Authorize a token for a library repo Example Request : PUT /v1/repositories/foobar/auth HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Token signature=123abc,repository=\"library/foobar\",access=write Parameters: repo_name \u2013 the library name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n \"OK\" Status Codes: 200 \u2013 OK 403 \u2013 Permission denied 404 \u2013 Not found",
"title": "Library Repository"
},
{
"loc": "/reference/api/docker-io_api#user-repository_1",
"tags": "",
"text": "Authorize a token for a user repository PUT /v1/repositories/(namespace)/(repo_name)/auth Authorize a token for a user repo Example Request : PUT /v1/repositories/foo/bar/auth HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=write Parameters: namespace \u2013 the namespace for the repo repo_name \u2013 the name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n\n \"OK\" Status Codes: 200 \u2013 OK 403 \u2013 Permission denied 404 \u2013 Not found",
"title": "User Repository"
},
{
"loc": "/reference/api/docker-io_api#users",
"tags": "",
"text": "User Login GET /v1/users/ If you want to check your login, you can try this endpoint. Example Request : GET /v1/users/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Authorization: Basic akmklmasadalkm== Example Response : HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: application/json\n\n OK Status Codes: 200 \u2013 no error 401 \u2013 Unauthorized 403 \u2013 Account is not Active User Register POST /v1/users/ Registering a new account. Example request : POST /v1/users/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n\n {\"email\": \"sam@docker.com\",\n \"password\": \"toto42\",\n \"username\": \"foobar\"} Json Parameters: email \u2013 valid email address that needs to be confirmed username \u2013 min 4 characters, max 30 characters, must match\n the regular expression [a-z0-9_]. password \u2013 min 5 characters Example Response : HTTP/1.1 201 OK\n Vary: Accept\n Content-Type: application/json\n\n \"User Created\" Status Codes: 201 \u2013 User Created 400 \u2013 Errors (invalid json, missing or invalid fields, etc) Update User PUT /v1/users/(username)/ Change a password or email address for a given user. If you pass in an\nemail, it will be added to your account; the old one will not be\nremoved. Passwords will be updated. It is up to the client to verify that the password being sent is\nthe one the user intends. A common approach is to have them type it\ntwice. 
Example Request : PUT /v1/users/fakeuser/ HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Basic akmklmasadalkm==\n\n {\"email\": \"sam@docker.com\",\n \"password\": \"toto42\"} Parameters: username \u2013 username for the person you want to update Example Response : HTTP/1.1 204\n Vary: Accept\n Content-Type: application/json\n\n \"\" Status Codes: 204 \u2013 User Updated 400 \u2013 Errors (invalid json, missing or invalid fields, etc) 401 \u2013 Unauthorized 403 \u2013 Account is not Active 404 \u2013 User not found",
"title": "Users"
},
{
"loc": "/reference/api/registry_api/",
"tags": "",
"text": "Docker Registry API\nIntroduction\n\nThis is the REST API for the Docker Registry\nIt stores the images and the graph for a set of repositories\nIt does not have user accounts data\nIt has no notion of user accounts or authorization\nIt delegates authentication and authorization to the Index Auth\n service using tokens\nIt supports different storage backends (S3, cloud files, local FS)\nIt doesn't have a local database\nThe registry is open source: Docker Registry\n\nWe expect that there will be multiple registries out there. To help to\ngrasp the context, here are some examples of registries:\n\nsponsor registry: such a registry is provided by a third-party\n hosting infrastructure as a convenience for their customers and the\n Docker community as a whole. Its costs are supported by the third\n party, but the management and operation of the registry are\n supported by Docker. It features read/write access, and delegates\n authentication and authorization to the Index.\nmirror registry: such a registry is provided by a third-party\n hosting infrastructure but is targeted at their customers only. Some\n mechanism (unspecified to date) ensures that public images are\n pulled from a sponsor registry to the mirror registry, to make sure\n that the customers of the third-party provider can docker pull\n those images locally.\nvendor registry: such a registry is provided by a software\n vendor, who wants to distribute Docker images. It would be operated\n and managed by the vendor. Only users authorized by the vendor would\n be able to get write access. 
Some images would be public (accessible\n for anyone), others private (accessible only for authorized users).\n Authentication and authorization would be delegated to the Index.\n The goal of vendor registries is to let someone do docker pull\n basho/riak1.3 and automatically push from the vendor registry\n (instead of a sponsor registry); i.e., get all the convenience of a\n sponsor registry, while retaining control on the asset distribution.\nprivate registry: such a registry is located behind a firewall,\n or protected by an additional security layer (HTTP authorization,\n SSL client-side certificates, IP address authorization...). The\n registry is operated by a private entity, outside of Docker's\n control. It can optionally delegate additional authorization to the\n Index, but it is not mandatory.\n\n\nNote:\nMirror registries and private registries which do not use the Index\ndon't even need to run the registry code. They can be implemented by any\nkind of transport implementing HTTP GET and PUT. Read-only registries\ncan be powered by a simple static HTTPS server.\nNote:\nThe latter implies that while HTTP is the protocol of choice for a registry,\nmultiple schemes are possible (and in some cases, trivial):\n\nHTTP with GET (and PUT for read-write registries);\nlocal mount point;\nremote Docker addressed through SSH.\n\n\nThe latter would only require two new commands in Docker, e.g.,\nregistryget and registryput, wrapping access to the local filesystem\n(and optionally doing consistency checks). Authentication and authorization\nare then delegated to SSH (e.g., with public keys).\n\nNote:\nPrivate registry servers that expose an HTTP endpoint need to be secured with\nTLS (preferably TLSv1.2, but at least TLSv1.0). 
Make sure to put the CA\ncertificate at /etc/docker/certs.d/my.registry.com:5000/ca.crt on the Docker\nhost, so that the daemon can securely access the private registry.\nSupport for SSLv3 and lower is not available due to security issues.\n\nThe default namespace for a private repository is library.\nEndpoints\nImages\nGet image layer\nGET /v1/images/(image_id)/layer\nGet image layer for a given image_id\nExample Request:\n GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=read\n\nParameters:\n\nimage_id \u2013 the id for the layer you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry)\n\n {layer binary data stream}\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Image not found\n\nPut image layer\nPUT /v1/images/(image_id)/layer\nPut image layer for a given image_id\nExample Request:\n PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1\n Host: registry-1.docker.io\n Transfer-Encoding: chunked\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=write\n\n {layer binary data stream}\n\nParameters:\n\nimage_id \u2013 the id for the layer you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Image not found\n\nImage\nPut image layer\nPUT /v1/images/(image_id)/json\nPut image for a given image_id\nExample Request:\n PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie 
provided by the Registry)\n\n {\n id: \"088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c\",\n parent: \"aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f\",\n created: \"2013-04-30T17:46:10.843673+03:00\",\n container: \"8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7\",\n container_config: {\n Hostname: \"host-test\",\n User: \"\",\n Memory: 0,\n MemorySwap: 0,\n AttachStdin: false,\n AttachStdout: false,\n AttachStderr: false,\n PortSpecs: null,\n Tty: false,\n OpenStdin: false,\n StdinOnce: false,\n Env: null,\n Cmd: [\n \"/bin/bash\",\n \"-c\",\n \"apt-get -q -yy -f install libevent-dev\"\n ],\n Dns: null,\n Image: \"imagename/blah\",\n Volumes: { },\n VolumesFrom: \"\"\n },\n docker_version: \"0.1.7\"\n }\n\nParameters:\n\nimage_id \u2013 the id for the layer you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n\nGet image layer\nGET /v1/images/(image_id)/json\nGet image for a given image_id\nExample Request:\n GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\nParameters:\n\nimage_id \u2013 the id for the layer you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n X-Docker-Size: 456789\n X-Docker-Checksum: b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\n\n {\n id: \"088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c\",\n parent: \"aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f\",\n created: \"2013-04-30T17:46:10.843673+03:00\",\n container: \"8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7\",\n container_config: {\n Hostname: 
\"host-test\",\n User: \"\",\n Memory: 0,\n MemorySwap: 0,\n AttachStdin: false,\n AttachStdout: false,\n AttachStderr: false,\n PortSpecs: null,\n Tty: false,\n OpenStdin: false,\n StdinOnce: false,\n Env: null,\n Cmd: [\n \"/bin/bash\",\n \"-c\",\n \"apt-get -q -yy -f install libevent-dev\"\n ],\n Dns: null,\n Image: \"imagename/blah\",\n Volumes: { },\n VolumesFrom: \"\"\n },\n docker_version: \"0.1.7\"\n }\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Image not found\n\nAncestry\nGet image ancestry\nGET /v1/images/(image_id)/ancestry\nGet ancestry for an image given an image_id\nExample Request:\n GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\nParameters:\n\nimage_id \u2013 the id for the layer you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n [\"088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f\",\n \"aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543\",\n \"bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280\",\n \"6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544\"]\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Image not found\n\nTags\nList repository tags\nGET /v1/repositories/(namespace)/(repository)/tags\nGet all of the tags for the given repo.\nExample Request:\n GET /v1/repositories/reynholm/help-system-server/tags HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry)\n\nParameters:\n\nnamespace \u2013 namespace for the repo\nrepository \u2013 name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: 
application/json\n X-Docker-Registry-Version: 0.6.0\n\n {\n \"latest\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"0.1.1\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"\n }\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Repository not found\n\nGet image id for a particular tag\nGET /v1/repositories/(namespace)/(repository)/tags/(tag*)\nGet a tag for the given repo.\nExample Request:\n GET /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry)\n\nParameters:\n\nnamespace \u2013 namespace for the repo\nrepository \u2013 name for the repo\ntag \u2013 name of tag you want to get\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Tag not found\n\nDelete a repository tag\nDELETE /v1/repositories/(namespace)/(repository)/tags/(tag*)\nDelete the tag for the repo\nExample Request:\n DELETE /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\nParameters:\n\nnamespace \u2013 namespace for the repo\nrepository \u2013 name for the repo\ntag \u2013 name of tag you want to delete\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Tag not found\n\nSet a tag for a specified image id\nPUT /v1/repositories/(namespace)/(repository)/tags/(tag*)\nPut a tag for the given repo.\nExample Request:\n PUT 
/v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\n \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"\n\nParameters:\n\nnamespace \u2013 namespace for the repo\nrepository \u2013 name for the repo\ntag \u2013 name of tag you want to add\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n400 \u2013 Invalid data\n401 \u2013 Requires authorization\n404 \u2013 Image not found\n\nRepositories\nDelete a repository\nDELETE /v1/repositories/(namespace)/(repository)/\nDelete a repository\nExample Request:\n DELETE /v1/repositories/reynholm/help-system-server/ HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\n \"\"\n\nParameters:\n\nnamespace \u2013 namespace for the repo\nrepository \u2013 name for the repo\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n401 \u2013 Requires authorization\n404 \u2013 Repository not found\n\nSearch\nIf you need to search the index, this is the endpoint you would use.\nGET /v1/search\nSearch the Index given a search term. 
It accepts\n[GET](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3)\nonly.\n\nExample request:\n GET /v1/search?q=search_term&page=1&n=25 HTTP/1.1\n Host: index.docker.io\n Accept: application/json\n\nQuery Parameters:\n\nq \u2013 what you want to search for\nn - number of results you want returned per page (default: 25, min:1, max:100)\npage - page number of results\n\nExample response:\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: application/json\n\n {\"num_pages\": 1,\n \"num_results\": 3,\n \"results\" : [\n {\"name\": \"ubuntu\", \"description\": \"An ubuntu image...\"},\n {\"name\": \"centos\", \"description\": \"A centos image...\"},\n {\"name\": \"fedora\", \"description\": \"A fedora image...\"}\n ],\n \"page_size\": 25,\n \"query\":\"search_term\",\n \"page\": 1\n }\n\nResponse Items:\n- num_pages - Total number of pages returned by query\n- num_results - Total number of results returned by query\n- results - List of results for the current page\n- page_size - How many results returned per page\n- query - Your search term\n- page - Current page number\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nStatus\nStatus check for registry\nGET /v1/_ping\nCheck status of the registry. This endpoint is also used to\ndetermine if the registry supports SSL.\nExample Request:\n GET /v1/_ping HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n\n \"\"\n\nExample Response:\n HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\"\n\nStatus Codes:\n\n200 \u2013 OK\n\nAuthorization\nThis is where we describe the authorization process, including the\ntokens and cookies.",
"title": "Docker Registry API"
},
{
"loc": "/reference/api/registry_api#docker-registry-api",
"tags": "",
"text": "",
"title": "Docker Registry API"
},
{
"loc": "/reference/api/registry_api#introduction",
"tags": "",
"text": "This is the REST API for the Docker Registry It stores the images and the graph for a set of repositories It does not have user accounts data It has no notion of user accounts or authorization It delegates authentication and authorization to the Index Auth\n service using tokens It supports different storage backends (S3, cloud files, local FS) It doesn't have a local database The registry is open source: Docker Registry We expect that there will be multiple registries out there. To help to\ngrasp the context, here are some examples of registries: sponsor registry : such a registry is provided by a third-party\n hosting infrastructure as a convenience for their customers and the\n Docker community as a whole. Its costs are supported by the third\n party, but the management and operation of the registry are\n supported by Docker. It features read/write access, and delegates\n authentication and authorization to the Index. mirror registry : such a registry is provided by a third-party\n hosting infrastructure but is targeted at their customers only. Some\n mechanism (unspecified to date) ensures that public images are\n pulled from a sponsor registry to the mirror registry, to make sure\n that the customers of the third-party provider can docker pull \n those images locally. vendor registry : such a registry is provided by a software\n vendor, who wants to distribute Docker images. It would be operated\n and managed by the vendor. Only users authorized by the vendor would\n be able to get write access. Some images would be public (accessible\n for anyone), others private (accessible only for authorized users).\n Authentication and authorization would be delegated to the Index.\n The goal of vendor registries is to let someone do docker pull\n basho/riak1.3 and automatically push from the vendor registry\n (instead of a sponsor registry); i.e., get all the convenience of a\n sponsor registry, while retaining control on the asset distribution. 
private registry : such a registry is located behind a firewall,\n or protected by an additional security layer (HTTP authorization,\n SSL client-side certificates, IP address authorization...). The\n registry is operated by a private entity, outside of Docker's\n control. It can optionally delegate additional authorization to the\n Index, but it is not mandatory. Note :\nMirror registries and private registries which do not use the Index\ndon't even need to run the registry code. They can be implemented by any\nkind of transport implementing HTTP GET and PUT. Read-only registries\ncan be powered by a simple static HTTPS server. Note :\nThe latter implies that while HTTP is the protocol of choice for a registry,\nmultiple schemes are possible (and in some cases, trivial): HTTP with GET (and PUT for read-write registries); local mount point; remote Docker addressed through SSH. The latter would only require two new commands in Docker, e.g., registryget and registryput , wrapping access to the local filesystem\n(and optionally doing consistency checks). Authentication and authorization\nare then delegated to SSH (e.g., with public keys). Note :\nPrivate registry servers that expose an HTTP endpoint need to be secured with\nTLS (preferably TLSv1.2, but at least TLSv1.0). Make sure to put the CA\ncertificate at /etc/docker/certs.d/my.registry.com:5000/ca.crt on the Docker\nhost, so that the daemon can securely access the private registry.\nSupport for SSLv3 and lower is not available due to security issues. The default namespace for a private repository is library .",
"title": "Introduction"
},
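The TLS note above pins the CA certificate to a fixed per-host location under /etc/docker/certs.d. As a quick sanity check, the expected path can be computed from the registry host and port; the helper below is an illustrative sketch, not part of any Docker tooling.

```python
# Illustrative sketch: compute the CA certificate path the note above
# prescribes for a private registry host:port. The function name is
# hypothetical, not a Docker API.
def ca_cert_path(registry_host: str) -> str:
    """Return /etc/docker/certs.d/<host:port>/ca.crt for a registry."""
    return f"/etc/docker/certs.d/{registry_host}/ca.crt"
```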
{
"loc": "/reference/api/registry_api#endpoints",
"tags": "",
"text": "",
"title": "Endpoints"
},
{
"loc": "/reference/api/registry_api#images",
"tags": "",
"text": "Get image layer GET /v1/images/(image_id)/layer Get image layer for a given image_id Example Request : GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=read Parameters: image_id \u2013 the id for the layer you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry)\n\n {layer binary data stream} Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Image not found Put image layer PUT /v1/images/(image_id)/layer Put image layer for a given image_id Example Request : PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1\n Host: registry-1.docker.io\n Transfer-Encoding: chunked\n Authorization: Token signature=123abc,repository=\"foo/bar\",access=write\n\n {layer binary data stream} Parameters: image_id \u2013 the id for the layer you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Image not found",
"title": "Images"
},
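The two layer endpoints above differ only in the HTTP method and the access level carried in the Token header. A minimal sketch of composing such a request (helper names are illustrative; nothing is sent over the network here):

```python
# Illustrative sketch of the v1 layer endpoint URL and Token header shown
# above; these helpers are not part of any Docker client library.
def layer_url(image_id: str, host: str = "registry-1.docker.io") -> str:
    return f"https://{host}/v1/images/{image_id}/layer"

def token_header(signature: str, repository: str, access: str) -> dict:
    # GET uses access=read, PUT uses access=write, as in the examples above.
    return {
        "Authorization":
            f'Token signature={signature},repository="{repository}",access={access}'
    }
```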
{
"loc": "/reference/api/registry_api#image",
"tags": "",
"text": "Put image layer PUT /v1/images/(image_id)/json Put image for a given image_id Example Request : PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\n {\n id: \"088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c\",\n parent: \"aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f\",\n created: \"2013-04-30T17:46:10.843673+03:00\",\n container: \"8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7\",\n container_config: {\n Hostname: \"host-test\",\n User: \"\",\n Memory: 0,\n MemorySwap: 0,\n AttachStdin: false,\n AttachStdout: false,\n AttachStderr: false,\n PortSpecs: null,\n Tty: false,\n OpenStdin: false,\n StdinOnce: false,\n Env: null,\n Cmd: [\n \"/bin/bash\",\n \"-c\",\n \"apt-get -q -yy -f install libevent-dev\"\n ],\n Dns: null,\n Image: \"imagename/blah\",\n Volumes: { },\n VolumesFrom: \"\"\n },\n docker_version: \"0.1.7\"\n } Parameters: image_id \u2013 the id for the layer you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization Get image layer GET /v1/images/(image_id)/json Get image for a given image_id Example Request : GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry) Parameters: image_id \u2013 the id for the layer you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n X-Docker-Size: 456789\n X-Docker-Checksum: b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\n\n {\n id: 
\"088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c\",\n parent: \"aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f\",\n created: \"2013-04-30T17:46:10.843673+03:00\",\n container: \"8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7\",\n container_config: {\n Hostname: \"host-test\",\n User: \"\",\n Memory: 0,\n MemorySwap: 0,\n AttachStdin: false,\n AttachStdout: false,\n AttachStderr: false,\n PortSpecs: null,\n Tty: false,\n OpenStdin: false,\n StdinOnce: false,\n Env: null,\n Cmd: [\n \"/bin/bash\",\n \"-c\",\n \"apt-get -q -yy -f install libevent-dev\"\n ],\n Dns: null,\n Image: \"imagename/blah\",\n Volumes: { },\n VolumesFrom: \"\"\n },\n docker_version: \"0.1.7\"\n } Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Image not found",
"title": "Image"
},
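The example payloads above are printed with unquoted keys for readability; on the wire the body is ordinary JSON. A minimal sketch of building such a body, reusing the ids from the example (only a subset of fields shown):

```python
import json

# Minimal image-metadata body for PUT /v1/images/(image_id)/json,
# using the ids from the example above.
image = {
    "id": "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
    "parent": "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
    "created": "2013-04-30T17:46:10.843673+03:00",
    "docker_version": "0.1.7",
}
body = json.dumps(image)  # send with Content-Type: application/json
```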
{
"loc": "/reference/api/registry_api#ancestry",
"tags": "",
"text": "Get image ancestry GET /v1/images/(image_id)/ancestry Get ancestry for an image given an image_id Example Request : GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry) Parameters: image_id \u2013 the id for the layer you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n [\"088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f\",\n \"aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543\",\n \"bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280\",\n \"6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544\"] Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Image not found",
"title": "Ancestry"
},
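The ancestry response above lists the image id first, followed by its ancestors down to the base image (the spec's "age order", with the oldest ancestor last). To apply layers base-first, a client simply walks the list in reverse; the ids below are taken from the example response.

```python
# Ancestry from the example above: target image first, base image last.
ancestry = [
    "088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
    "aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
    "bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280",
    "6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544",
]
# Apply layers base-first when reconstructing the image.
base_first = list(reversed(ancestry))
```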
{
"loc": "/reference/api/registry_api#tags",
"tags": "",
"text": "List repository tags GET /v1/repositories/(namespace)/(repository)/tags Get all of the tags for the given repo. Example Request : GET /v1/repositories/reynholm/help-system-server/tags HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry) Parameters: namespace \u2013 namespace for the repo repository \u2013 name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n {\n \"latest\": \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\",\n \"0.1.1\": \"b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\"\n } Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Repository not found Get image id for a particular tag GET /v1/repositories/(namespace)/(repository)/tags/(tag*) Get a tag for the given repo. Example Request : GET /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n Cookie: (Cookie provided by the Registry) Parameters: namespace \u2013 namespace for the repo repository \u2013 name for the repo tag \u2013 name of tag you want to get Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\" Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Tag not found Delete a repository tag DELETE /v1/repositories/(namespace)/(repository)/tags/(tag*) Delete the tag for the repo Example Request : DELETE /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry) Parameters: namespace \u2013 namespace 
for the repo repository \u2013 name for the repo tag \u2013 name of tag you want to delete Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Tag not found Set a tag for a specified image id PUT /v1/repositories/(namespace)/(repository)/tags/(tag*) Put a tag for the given repo. Example Request : PUT /v1/repositories/reynholm/help-system-server/tags/latest HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\n \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\" Parameters: namespace \u2013 namespace for the repo repository \u2013 name for the repo tag \u2013 name of tag you want to add Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK 400 \u2013 Invalid data 401 \u2013 Requires authorization 404 \u2013 Image not found",
"title": "Tags"
},
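All four tag operations above share one URL shape; only the HTTP method changes (GET to list or read, PUT to set, DELETE to remove). A sketch of building those URLs, using the repo from the examples (the helper is illustrative, not a Docker client function):

```python
# Illustrative: URL shape shared by the tag endpoints above. GET on the
# base URL lists all tags; GET/PUT/DELETE on base + "/<tag>" read, set,
# or delete a single tag.
def tags_url(namespace: str, repository: str, tag: str = None,
             host: str = "registry-1.docker.io") -> str:
    base = f"https://{host}/v1/repositories/{namespace}/{repository}/tags"
    return base if tag is None else f"{base}/{tag}"
```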
{
"loc": "/reference/api/registry_api#repositories",
"tags": "",
"text": "Delete a repository DELETE /v1/repositories/(namespace)/(repository)/ Delete a repository Example Request : DELETE /v1/repositories/reynholm/help-system-server/ HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n Cookie: (Cookie provided by the Registry)\n\n \"\" Parameters: namespace \u2013 namespace for the repo repository \u2013 name for the repo Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK 401 \u2013 Requires authorization 404 \u2013 Repository not found",
"title": "Repositories"
},
{
"loc": "/reference/api/registry_api#search",
"tags": "",
"text": "If you need to search the index, this is the endpoint you would use. GET /v1/search Search the Index given a search term. It accepts [GET](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3)\nonly. Example request : GET /v1/search?q=search_term page=1 n=25 HTTP/1.1\n Host: index.docker.io\n Accept: application/json Query Parameters: q \u2013 what you want to search for n - number of results you want returned per page (default: 25, min:1, max:100) page - page number of results Example response : HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: application/json\n\n {\"num_pages\": 1,\n \"num_results\": 3,\n \"results\" : [\n {\"name\": \"ubuntu\", \"description\": \"An ubuntu image...\"},\n {\"name\": \"centos\", \"description\": \"A centos image...\"},\n {\"name\": \"fedora\", \"description\": \"A fedora image...\"}\n ],\n \"page_size\": 25,\n \"query\":\"search_term\",\n \"page\": 1\n } Response Items:\n- num_pages - Total number of pages returned by query\n- num_results - Total number of results returned by query\n- results - List of results for the current page\n- page_size - How many results returned per page\n- query - Your search term\n- page - Current page number Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "Search"
},
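The search parameters above are ordinary URL-encoded query pairs, with n bounded to 1\u2013100 (default 25). A sketch of composing the request URL; clamping n client-side mirrors the documented bounds and is an assumption about client behavior, since the server enforces its own limits:

```python
from urllib.parse import urlencode

# Illustrative: build the /v1/search URL described above. The client-side
# clamp of n mirrors the documented min:1/max:100 bounds (assumption).
def search_url(term: str, page: int = 1, n: int = 25) -> str:
    n = max(1, min(100, n))
    query = urlencode({"q": term, "page": page, "n": n})
    return f"https://index.docker.io/v1/search?{query}"
```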
{
"loc": "/reference/api/registry_api#status",
"tags": "",
"text": "Status check for registry GET /v1/_ping Check status of the registry. This endpoint is also used to\ndetermine if the registry supports SSL. Example Request : GET /v1/_ping HTTP/1.1\n Host: registry-1.docker.io\n Accept: application/json\n Content-Type: application/json\n\n \"\" Example Response : HTTP/1.1 200\n Vary: Accept\n Content-Type: application/json\n X-Docker-Registry-Version: 0.6.0\n\n \"\" Status Codes: 200 \u2013 OK",
"title": "Status"
},
{
"loc": "/reference/api/registry_api#authorization",
"tags": "",
"text": "This is where we describe the authorization process, including the\ntokens and cookies.",
"title": "Authorization"
},
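The Token credential used throughout these endpoints is a flat comma-separated triple of signature, repository, and access level. A small parser for that exact form (illustrative only; it assumes the repository name itself contains no commas):

```python
# Parse 'Token signature=...,repository="...",access=...' as used in the
# Authorization headers above. Illustrative sketch; assumes no commas
# appear inside the repository name.
def parse_token(value: str) -> dict:
    scheme, _, rest = value.partition(" ")
    assert scheme == "Token", "not a Token credential"
    fields = {}
    for part in rest.split(","):
        key, _, val = part.partition("=")
        fields[key.strip()] = val.strip().strip('"')
    return fields
```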
{
"loc": "/reference/api/registry_api_client_libraries/",
"tags": "",
"text": "Docker Registry API Client Libraries\nThese libraries have not been tested by the Docker maintainers for\ncompatibility. Please file issues with the library owners. If you find\nmore library implementations, please submit a PR with an update to this page\nor open an issue in the Docker \nproject and we will add the libraries here.\n\n \n \n \n \n \n \n \n Language/Framework\n Name\n Repository\n Status\n \n \n \n \n JavaScript (AngularJS) WebUI\n docker-registry-frontend\n https://github.com/kwk/docker-registry-frontend\n Active\n \n \n Go\n docker-reg-client\n https://github.com/CenturyLinkLabs/docker-reg-client\n Active",
"title": "Docker Registry API Client Libraries"
},
{
"loc": "/reference/api/registry_api_client_libraries#docker-registry-api-client-libraries",
"tags": "",
"text": "These libraries have not been tested by the Docker maintainers for\ncompatibility. Please file issues with the library owners. If you find\nmore library implementations, please submit a PR with an update to this page\nor open an issue in the Docker \nproject and we will add the libraries here. \n \n \n \n \n \n \n \n Language/Framework \n Name \n Repository \n Status \n \n \n \n \n JavaScript (AngularJS) WebUI \n docker-registry-frontend \n https://github.com/kwk/docker-registry-frontend \n Active \n \n \n Go \n docker-reg-client \n https://github.com/CenturyLinkLabs/docker-reg-client \n Active",
"title": "Docker Registry API Client Libraries"
},
{
"loc": "/reference/api/hub_registry_spec/",
"tags": "",
"text": "The Docker Hub and the Registry spec\nThe three roles\nThere are three major components playing a role in the Docker ecosystem.\nDocker Hub\nThe Docker Hub is responsible for centralizing information about:\n\nUser accounts\nChecksums of the images\nPublic namespaces\n\nThe Docker Hub has different components:\n\nWeb UI\nMeta-data store (comments, stars, list public repositories)\nAuthentication service\nTokenization\n\nThe Docker Hub is authoritative for that information.\nThere is only one instance of the Docker Hub, run and\nmanaged by Docker Inc.\nRegistry\nThe registry has the following characteristics:\n\nIt stores the images and the graph for a set of repositories\nIt does not have user accounts data\nIt has no notion of user accounts or authorization\nIt delegates authentication and authorization to the Docker Hub Auth\n service using tokens\nIt supports different storage backends (S3, cloud files, local FS)\nIt doesn't have a local database\nSource Code\n\nWe expect that there will be multiple registries out there. To help you\ngrasp the context, here are some examples of registries:\n\nsponsor registry: such a registry is provided by a third-party\n hosting infrastructure as a convenience for their customers and the\n Docker community as a whole. Its costs are supported by the third\n party, but the management and operation of the registry are\n supported by Docker, Inc. It features read/write access, and delegates\n authentication and authorization to the Docker Hub.\nmirror registry: such a registry is provided by a third-party\n hosting infrastructure but is targeted at their customers only. Some\n mechanism (unspecified to date) ensures that public images are\n pulled from a sponsor registry to the mirror registry, to make sure\n that the customers of the third-party provider can docker pull\n those images locally.\nvendor registry: such a registry is provided by a software\n vendor who wants to distribute docker images. 
It would be operated\n and managed by the vendor. Only users authorized by the vendor would\n be able to get write access. Some images would be public (accessible\n for anyone), others private (accessible only for authorized users).\n Authentication and authorization would be delegated to the Docker Hub.\n The goal of vendor registries is to let someone do docker pull\n basho/riak1.3 and automatically push from the vendor registry\n (instead of a sponsor registry); i.e., vendors get all the convenience of a\n sponsor registry, while retaining control on the asset distribution.\nprivate registry: such a registry is located behind a firewall,\n or protected by an additional security layer (HTTP authorization,\n SSL client-side certificates, IP address authorization...). The\n registry is operated by a private entity, outside of Docker's\n control. It can optionally delegate additional authorization to the\n Docker Hub, but it is not mandatory.\n\n\nNote: The latter implies that while HTTP is the protocol\nof choice for a registry, multiple schemes are possible (and\nin some cases, trivial):\n\nHTTP with GET (and PUT for read-write registries);\nlocal mount point;\nremote docker addressed through SSH.\n\n\nThe latter would only require two new commands in Docker, e.g.,\nregistryget and registryput,\nwrapping access to the local filesystem (and optionally doing\nconsistency checks). Authentication and authorization are then delegated\nto SSH (e.g., with public keys).\nDocker\nOn top of being a runtime for LXC, Docker is the Registry client. It\nsupports:\n\nPush / Pull on the registry\nClient authentication on the Docker Hub\n\nWorkflow\nPull\n\n\nContact the Docker Hub to know where I should download \u201csamalba/busybox\u201d\nDocker Hub replies: a. samalba/busybox is on Registry A b. here are the\n checksums for samalba/busybox (for all layers) c. token\nContact Registry A to receive the layers for samalba/busybox (all of\n them to the base image). 
Registry A is authoritative for \u201csamalba/busybox\u201d\n but keeps a copy of all inherited layers and serves them all from the same\n location.\nThe registry contacts the Docker Hub to verify if token/user is allowed to download images\nThe Docker Hub returns true/false, letting the registry know if it should proceed or error\n out\nGet the payload for all layers\n\nIt's possible to run:\n$ sudo docker pull https://registry/repositories/samalba/busybox\n\nIn this case, Docker bypasses the Docker Hub. However, the security is not\nguaranteed (in case Registry A is corrupted) because there won't be any\nchecksum checks.\nCurrently the registry redirects to S3 URLs for downloads; going forward, all\ndownloads need to be streamed through the registry. The Registry will\nthen abstract the calls to S3 by a top-level class which implements\nsub-classes for S3 and local storage.\nThe token is only returned when the X-Docker-Token\nheader is sent with the request.\nBasic Auth is required to pull private repos. Basic auth isn't required\nfor pulling public repos, but if one is provided, it needs to be valid\nand for an active account.\nAPI (pulling repository foo/bar):\n\n(Docker - Docker Hub) GET /v1/repositories/foo/bar/images:\n\nHeaders:\n Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n X-Docker-Token: true\n\nAction:\n (looking up the foo/bar in db and gets images and checksums\n for that repo (all if no tag is specified, if tag, only\n checksums for those tags) see part 4.4.1)\n\n\n(Docker Hub - Docker) HTTP 200 OK\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n X-Docker-Endpoints: registry.docker.io [,registry2.docker.io]\n\nBody:\n Jsonified checksums (see part 4.4.1)\n\n\n(Docker - Registry) GET /v1/repositories/foo/bar/tags/latest\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n\n\n(Registry - Docker Hub) GET /v1/repositories/foo/bar/images\n\nHeaders:\n Authorization: Token\n 
signature=123abc,repository=\u201dfoo/bar\u201d,access=read\n\nBody:\n ids and checksums in payload\n\nAction:\n (Lookup token see if they have access to pull.)\n\n If good:\n HTTP 200 OK Docker Hub will invalidate the token\n\n If bad:\n HTTP 401 Unauthorized\n\n\n(Docker - Registry) GET /v1/images/928374982374/ancestry\n\nAction:\n (for each image id returned in the registry, fetch /json + /layer)\n\n\nNote:\nIf someone makes a second request, then we will always give a new token,\nnever reuse tokens.\n\nPush\n\n\nContact the Docker Hub to allocate the repository name \u201csamalba/busybox\u201d\n (authentication required with user credentials)\nIf authentication works and namespace available, \u201csamalba/busybox\u201d\n is allocated and a temporary token is returned (namespace is marked\n as initialized in Docker Hub)\nPush the image on the registry (along with the token)\nRegistry A contacts the Docker Hub to verify the token (the token must\n correspond to the repository name)\nDocker Hub validates the token. Registry A starts reading the stream\n pushed by Docker and stores the repository (with its images)\nDocker contacts the Docker Hub to give checksums for the uploaded images\n\n\nNote:\nIt's possible not to use the Docker Hub at all! In this case, a standalone\nversion of the Registry is deployed to store and serve images. Those\nimages are not authenticated and the security is not guaranteed.\nNote:\nDocker Hub can be replaced! For a privately deployed Registry, a custom\nDocker Hub can be used to serve and validate tokens according to different\npolicies.\n\nDocker computes the checksums and submits them to the Docker Hub at the end of\nthe push. 
When a repository name does not have checksums on the Docker Hub,\nit means that the push is in progress (since checksums are submitted at\nthe end).\nAPI (pushing repos foo/bar):\n\n(Docker - Docker Hub) PUT /v1/repositories/foo/bar/\n\nHeaders:\n Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token:\n true\n\nAction:\n\nin Docker Hub, we allocated a new repository, and set to\n initialized\n\nBody:\n(The body contains the list of images that are going to be\npushed, with empty checksums. The checksums will be set at\nthe end of the push):\n [{\u201cid\u201d: \u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d}]\n\n\n(Docker Hub - Docker) 200 Created\n\nHeaders:\n WWW-Authenticate: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]\n\n\n(Docker - Registry) PUT /v1/images/98765432_parent/json\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n\n\n(Registry-Docker Hub) GET /v1/repositories/foo/bar/images\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n\nAction:\n\nDocker Hub:\n will invalidate the token.\n\nRegistry:\n grants a session (if token is approved) and fetches\n the images id\n\n\n(Docker - Registry) PUT /v1/images/98765432_parent/json\n\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n Cookie: (Cookie provided by the Registry)\n\n\n(Docker - Registry) PUT /v1/images/98765432/json\n\nHeaders:\n Cookie: (Cookie provided by the Registry)\n\n\n(Docker - Registry) PUT /v1/images/98765432_parent/layer\n\nHeaders:\n Cookie: (Cookie provided by the Registry)\n\n\n(Docker - Registry) PUT /v1/images/98765432/layer\n\nHeaders:\n X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh\n\n\n(Docker - Registry) PUT /v1/repositories/foo/bar/tags/latest\n\nHeaders:\n Cookie: (Cookie provided by the Registry)\n\nBody:\n 
\u201c98765432\u201d\n\n\n(Docker - Docker Hub) PUT /v1/repositories/foo/bar/images\n\nHeaders:\n Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints:\n registry1.docker.io (no validation on this right now)\n\nBody:\n (The image, id`s, tags and checksums)\n [{\u201cid\u201d:\n \u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n \u201cchecksum\u201d:\n \u201cb486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\u201d}]\n\nReturn:\n HTTP 204\n\n\nNote: If push fails and they need to start again, what happens in the Docker Hub,\nthere will already be a record for the namespace/name, but it will be\ninitialized. Should we allow it, or mark as name already used? One edge\ncase could be if someone pushes the same thing at the same time with two\ndifferent shells.\n\nIf it's a retry on the Registry, Docker has a cookie (provided by the\nregistry after token validation). So the Docker Hub won't have to provide a\nnew token.\nDelete\nIf you need to delete something from the Docker Hub or registry, we need a\nnice clean way to do that. Here is the workflow.\n\nDocker contacts the Docker Hub to request a delete of a repository\n samalba/busybox (authentication required with user credentials)\nIf authentication works and repository is valid, samalba/busybox\n is marked as deleted and a temporary token is returned\nSend a delete request to the registry for the repository (along with\n the token)\nRegistry A contacts the Docker Hub to verify the token (token must\n corresponds to the repository name)\nDocker Hub validates the token. Registry A deletes the repository and\n everything associated to it.\ndocker contacts the Docker Hub to let it know it was removed from the\n registry, the Docker Hub removes all records from the database.\n\n\nNote:\nThe Docker client should present an \"Are you sure?\" prompt to confirm\nthe deletion before starting the process. 
Once it starts it can't be\nundone.\n\nAPI (deleting repository foo/bar):\n\n(Docker - Docker Hub) DELETE /v1/repositories/foo/bar/\n\nHeaders:\n Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token:\n true\n\nAction:\n\nin Docker Hub, we make sure it is a valid repository, and set\n to deleted (logically)\n\nBody:\n Empty\n\n\n(Docker Hub - Docker) 202 Accepted\n\nHeaders:\n WWW-Authenticate: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete\n X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]\n # list of endpoints where this repo lives.\n\n\n(Docker - Registry) DELETE /v1/repositories/foo/bar/\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete\n\n\n(Registry-Docker Hub) PUT /v1/repositories/foo/bar/auth\n\nHeaders:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete\n\nAction:\n\nDocker Hub:\n will invalidate the token.\n\nRegistry:\n deletes the repository (if token is approved)\n\n\n(Registry - Docker) 200 OK\n200 If success 403 if forbidden 400 if bad request 404\nif repository isn't found\n\n\n\n(Docker - Docker Hub) DELETE /v1/repositories/foo/bar/\n\n\nHeaders:\n Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints:\n registry-1.docker.io (no validation on this right now)\n\nBody:\n Empty\n\nReturn:\n HTTP 200\n\nHow to use the Registry in standalone mode\nThe Docker Hub has two main purposes (along with its fancy social features):\n\n\nResolve short names (to avoid passing absolute URLs all the time):\nusername/projectname -\nhttps://registry.docker.io/users//repositories//\nteam/projectname -\nhttps://registry.docker.io/team//repositories//\n\n\nAuthenticate a user as a repos owner (for a central referenced\n repository)\n\n\nWithout a Docker Hub\nUsing the Registry without the Docker Hub can be useful to store the images\non a private network without having to rely on an external entity\ncontrolled by Docker Inc.\nIn this case, the 
registry will be launched in a special mode\n(-standalone? ne? -no-index?). In this mode, the only thing which changes is\nthat Registry will never contact the Docker Hub to verify a token. It will be\nthe Registry owner responsibility to authenticate the user who pushes\n(or even pulls) an image using any mechanism (HTTP auth, IP based,\netc...).\nIn this scenario, the Registry is responsible for the security in case\nof data corruption since the checksums are not delivered by a trusted\nentity.\nAs hinted previously, a standalone registry can also be implemented by\nany HTTP server handling GET/PUT requests (or even only GET requests if\nno write access is necessary).\nWith a Docker Hub\nThe Docker Hub data needed by the Registry are simple:\n\nServe the checksums\nProvide and authorize a Token\n\nIn the scenario of a Registry running on a private network with the need\nof centralizing and authorizing, it's easy to use a custom Docker Hub.\nThe only challenge will be to tell Docker to contact (and trust) this\ncustom Docker Hub. Docker will be configurable at some point to use a\nspecific Docker Hub, it'll be the private entity responsibility (basically\nthe organization who uses Docker in a private environment) to maintain\nthe Docker Hub and the Docker's configuration among its consumers.\nThe API\nThe first version of the api is available here:\nhttps://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md\nImages\nThe format returned in the images is not defined here (for layer and\nJSON), basically because Registry stores exactly the same kind of\ninformation as Docker uses to manage them.\nThe format of ancestry is a line-separated list of image ids, in age\norder, i.e. 
the image's parent is on the last line, the parent of the\nparent on the next-to-last line, etc.; if the image has no parent, the\nfile is empty.\nGET /v1/images/image_id/layer\nPUT /v1/images/image_id/layer\nGET /v1/images/image_id/json\nPUT /v1/images/image_id/json\nGET /v1/images/image_id/ancestry\nPUT /v1/images/image_id/ancestry\n\nUsers\nCreate a user (Docker Hub)\nPOST /v1/users:\n\nBody:\n{\"email\": \"[sam@docker.com](mailto:sam%40docker.com)\",\n\"password\": \"toto42\", \"username\": \"foobar\"`}\n\nValidation:\n\nusername: min 4 character, max 30 characters, must match the\n regular expression [a-z0-9_].\npassword: min 5 characters\n\nValid:\n return HTTP 201\n\nErrors: HTTP 400 (we should create error codes for possible errors) -\ninvalid json - missing field - wrong format (username, password, email,\netc) - forbidden name - name already exists\n\nNote:\nA user account will be valid only if the email has been validated (a\nvalidation link is sent to the email address).\n\nUpdate a user (Docker Hub)\nPUT /v1/users/username\n\nBody:\n{\"password\": \"toto\"}\n\n\nNote:\nWe can also update email address, if they do, they will need to reverify\ntheir new email address.\n\nLogin (Docker Hub)\nDoes nothing else but asking for a user authentication. Can be used to\nvalidate credentials. HTTP Basic Auth for now, maybe change in future.\nGET /v1/users\nReturn:\n- Valid: HTTP 200\n- Invalid login: HTTP 401\n- Account inactive: HTTP 403 Account is not Active\nTags (Registry)\nThe Registry does not know anything about users. Even though\nrepositories are under usernames, it's just a namespace for the\nregistry. 
Allowing us to implement organizations or different namespaces\nper user later, without modifying the Registry's API.\nThe following naming restrictions apply:\n\nNamespaces must match the same regular expression as usernames (See\n 4.2.1.)\nRepository names must match the regular expression [a-zA-Z0-9-_.]\n\nGet all tags:\nGET /v1/repositories/namespace/repository_name/tags\n\n**Return**: HTTP 200\n[\n {\n \"layer\": \"9e89cc6f\",\n \"name\": \"latest\"\n },\n {\n \"layer\": \"b486531f\",\n \"name\": \"0.1.1\",\n }\n]\n\n4.3.2 Read the content of a tag (resolve the image id):\nGET /v1/repositories/namespace/repo_name/tags/tag\n\nReturn:\n\"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\"\n\n4.3.3 Delete a tag (registry):\nDELETE /v1/repositories/namespace/repo_name/tags/tag\n\n4.4 Images (Docker Hub)\nFor the Docker Hub to \u201cresolve\u201d the repository name to a Registry location,\nit uses the X-Docker-Endpoints header. In other terms, this requests\nalways add a X-Docker-Endpoints to indicate the\nlocation of the registry which hosts this repository.\n4.4.1 Get the images:\nGET /v1/repositories/namespace/repo_name/images\n\n**Return**: HTTP 200\n[{\u201cid\u201d:\n\u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n\u201cchecksum\u201d:\n\u201c[md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087](md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087)\u201d}]\n\nAdd/update the images:\nYou always add images, you never remove them.\nPUT /v1/repositories/namespace/repo_name/images\n\nBody:\n[ {\u201cid\u201d:\n\u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n\u201cchecksum\u201d:\n\u201csha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\u201d}\n]\n\nReturn:\n204\n\nRepositories\nRemove a Repository (Registry)\nDELETE /v1/repositories//\nReturn 200 OK\nRemove a Repository (Docker Hub)\nThis starts the delete process. 
see 2.3 for more details.\nDELETE /v1/repositories//\nReturn 202 OK\nChaining Registries\nIt's possible to chain Registries server for several reasons:\n\nLoad balancing\nDelegate the next request to another server\n\nWhen a Registry is a reference for a repository, it should host the\nentire images chain in order to avoid breaking the chain during the\ndownload.\nThe Docker Hub and Registry use this mechanism to redirect on one or the\nother.\nExample with an image download:\nOn every request, a special header can be returned:\nX-Docker-Endpoints: server1,server2\n\nOn the next request, the client will always pick a server from this\nlist.\nAuthentication Authorization\nOn the Docker Hub\nThe Docker Hub supports both \u201cBasic\u201d and \u201cToken\u201d challenges. Usually when\nthere is a 401 Unauthorized, the Docker Hub replies\nthis:\n401 Unauthorized\nWWW-Authenticate: Basic realm=\"auth required\",Token\n\nYou have 3 options:\n\nProvide user credentials and ask for a token\n\nHeader:\n Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n X-Docker-Token: true\n\nIn this case, along with the 200 response, you'll get a new token\n(if user auth is ok): If authorization isn't correct you get a 401\nresponse. If account isn't active you will get a 403 response.\nResponse:\n 200 OK\n X-Docker-Token: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=read\n\n\nProvide user credentials only\n\nHeader:\n Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n\n\nProvide Token\n\nHeader:\n Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=read\n\n6.2 On the Registry\nThe Registry only supports the Token challenge:\n401 Unauthorized\nWWW-Authenticate: Token\n\nThe only way is to provide a token on 401 Unauthorized\nresponses:\nAuthorization: Token signature=123abc,repository=\"foo/bar\",access=read\n\nUsually, the Registry provides a Cookie when a Token verification\nsucceeded. 
Every time the Registry passes a Cookie, you have to pass it\nback the same cookie.:\n200 OK\nSet-Cookie: session=\"wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4=\"; Path=/; HttpOnly\n\nNext request:\nGET /(...)\nCookie: session=\"wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4=\"\n\nDocument Version\n\n1.0 : May 6th 2013 : initial release\n1.1 : June 1st 2013 : Added Delete Repository and way to handle new\n source namespace.",
"title": "Docker Hub and Registry Spec"
},
{
"loc": "/reference/api/hub_registry_spec#the-docker-hub-and-the-registry-spec",
"tags": "",
"text": "",
"title": "The Docker Hub and the Registry spec"
},
{
"loc": "/reference/api/hub_registry_spec#the-three-roles",
"tags": "",
"text": "There are three major components playing a role in the Docker ecosystem. Docker Hub The Docker Hub is responsible for centralizing information about: User accounts Checksums of the images Public namespaces The Docker Hub has different components: Web UI Meta-data store (comments, stars, list public repositories) Authentication service Tokenization The Docker Hub is authoritative for that information. There is only one instance of the Docker Hub, run and\nmanaged by Docker Inc. Registry The registry has the following characteristics: It stores the images and the graph for a set of repositories It does not have user accounts data It has no notion of user accounts or authorization It delegates authentication and authorization to the Docker Hub Auth\n service using tokens It supports different storage backends (S3, cloud files, local FS) It doesn't have a local database Source Code We expect that there will be multiple registries out there. To help you\ngrasp the context, here are some examples of registries: sponsor registry : such a registry is provided by a third-party\n hosting infrastructure as a convenience for their customers and the\n Docker community as a whole. Its costs are supported by the third\n party, but the management and operation of the registry are\n supported by Docker, Inc. It features read/write access, and delegates\n authentication and authorization to the Docker Hub. mirror registry : such a registry is provided by a third-party\n hosting infrastructure but is targeted at their customers only. Some\n mechanism (unspecified to date) ensures that public images are\n pulled from a sponsor registry to the mirror registry, to make sure\n that the customers of the third-party provider can docker pull \n those images locally. vendor registry : such a registry is provided by a software\n vendor who wants to distribute docker images. It would be operated\n and managed by the vendor. 
Only users authorized by the vendor would\n be able to get write access. Some images would be public (accessible\n for anyone), others private (accessible only for authorized users).\n Authentication and authorization would be delegated to the Docker Hub.\n The goal of vendor registries is to let someone do docker pull\n basho/riak1.3 and automatically push from the vendor registry\n (instead of a sponsor registry); i.e., vendors get all the convenience of a\n sponsor registry, while retaining control on the asset distribution. private registry : such a registry is located behind a firewall,\n or protected by an additional security layer (HTTP authorization,\n SSL client-side certificates, IP address authorization...). The\n registry is operated by a private entity, outside of Docker's\n control. It can optionally delegate additional authorization to the\n Docker Hub, but it is not mandatory. Note: The latter implies that while HTTP is the protocol\nof choice for a registry, multiple schemes are possible (and\nin some cases, trivial): HTTP with GET (and PUT for read-write registries); local mount point; remote docker addressed through SSH. The latter would only require two new commands in Docker, e.g., registryget and registryput ,\nwrapping access to the local filesystem (and optionally doing\nconsistency checks). Authentication and authorization are then delegated\nto SSH (e.g., with public keys). Docker On top of being a runtime for LXC, Docker is the Registry client. It\nsupports: Push / Pull on the registry Client authentication on the Docker Hub",
"title": "The three roles"
},
{
"loc": "/reference/api/hub_registry_spec#workflow",
"tags": "",
"text": "Pull Contact the Docker Hub to know where I should download \u201csamalba/busybox\u201d Docker Hub replies: a. samalba/busybox is on Registry A b. here are the\n checksums for samalba/busybox (for all layers) c. token Contact Registry A to receive the layers for samalba/busybox (all of\n them to the base image). Registry A is authoritative for \u201csamalba/busybox\u201d\n but keeps a copy of all inherited layers and serve them all from the same\n location. registry contacts Docker Hub to verify if token/user is allowed to download images Docker Hub returns true/false lettings registry know if it should proceed or error\n out Get the payload for all layers It's possible to run: $ sudo docker pull https:// registry /repositories/samalba/busybox In this case, Docker bypasses the Docker Hub. However the security is not\nguaranteed (in case Registry A is corrupted) because there won't be any\nchecksum checks. Currently registry redirects to s3 urls for downloads, going forward all\ndownloads need to be streamed through the registry. The Registry will\nthen abstract the calls to S3 by a top-level class which implements\nsub-classes for S3 and local storage. Token is only returned when the X-Docker-Token \nheader is sent with request. Basic Auth is required to pull private repos. Basic auth isn't required\nfor pulling public repos, but if one is provided, it needs to be valid\nand for an active account. 
API (pulling repository foo/bar): (Docker - Docker Hub) GET /v1/repositories/foo/bar/images: Headers : Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n X-Docker-Token: true Action : (looking up the foo/bar in db and gets images and checksums\n for that repo (all if no tag is specified, if tag, only\n checksums for those tags) see part 4.4.1) (Docker Hub - Docker) HTTP 200 OK Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n X-Docker-Endpoints: registry.docker.io [,registry2.docker.io] Body : Jsonified checksums (see part 4.4.1) (Docker - Registry) GET /v1/repositories/foo/bar/tags/latest Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write (Registry - Docker Hub) GET /v1/repositories/foo/bar/images Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=read Body : ids and checksums in payload Action : (Lookup token see if they have access to pull.)\n\n If good:\n HTTP 200 OK Docker Hub will invalidate the token\n\n If bad:\n HTTP 401 Unauthorized (Docker - Registry) GET /v1/images/928374982374/ancestry Action : (for each image id returned in the registry, fetch /json + /layer) Note :\nIf someone makes a second request, then we will always give a new token,\nnever reuse tokens. Push Contact the Docker Hub to allocate the repository name \u201csamalba/busybox\u201d\n (authentication required with user credentials) If authentication works and namespace available, \u201csamalba/busybox\u201d\n is allocated and a temporary token is returned (namespace is marked\n as initialized in Docker Hub) Push the image on the registry (along with the token) Registry A contacts the Docker Hub to verify the token (token must\n corresponds to the repository name) Docker Hub validates the token. 
Registry A starts reading the stream\n pushed by docker and store the repository (with its images) docker contacts the Docker Hub to give checksums for upload images Note: It's possible not to use the Docker Hub at all! In this case, a deployed\nversion of the Registry is deployed to store and serve images. Those\nimages are not authenticated and the security is not guaranteed. Note: Docker Hub can be replaced! For a private Registry deployed, a custom\nDocker Hub can be used to serve and validate token according to different\npolicies. Docker computes the checksums and submit them to the Docker Hub at the end of\nthe push. When a repository name does not have checksums on the Docker Hub,\nit means that the push is in progress (since checksums are submitted at\nthe end). API (pushing repos foo/bar): (Docker - Docker Hub) PUT /v1/repositories/foo/bar/ Headers : Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token:\n true Action : in Docker Hub, we allocated a new repository, and set to\n initialized Body : (The body contains the list of images that are going to be\npushed, with empty checksums. The checksums will be set at\nthe end of the push): [{\u201cid\u201d: \u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d}] (Docker Hub - Docker) 200 Created Headers : WWW-Authenticate: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n X-Docker-Endpoints: registry.docker.io [, registry2.docker.io] (Docker - Registry) PUT /v1/images/98765432_parent/json Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write (Registry- Docker Hub) GET /v1/repositories/foo/bar/images Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write Action : Docker Hub:\n will invalidate the token. 
Registry:\n grants a session (if token is approved) and fetches\n the images id (Docker - Registry) PUT /v1/images/98765432_parent/json Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=write\n Cookie: (Cookie provided by the Registry) (Docker - Registry) PUT /v1/images/98765432/json Headers : Cookie: (Cookie provided by the Registry) (Docker - Registry) PUT /v1/images/98765432_parent/layer Headers : Cookie: (Cookie provided by the Registry) (Docker - Registry) PUT /v1/images/98765432/layer Headers : X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh (Docker - Registry) PUT /v1/repositories/foo/bar/tags/latest Headers : Cookie: (Cookie provided by the Registry) Body : \u201c98765432\u201d (Docker - Docker Hub) PUT /v1/repositories/foo/bar/images Headers : Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints:\n registry1.docker.io (no validation on this right now) Body : (The image, id`s, tags and checksums)\n [{\u201cid\u201d:\n \u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n \u201cchecksum\u201d:\n \u201cb486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\u201d}] Return : HTTP 204 Note: If push fails and they need to start again, what happens in the Docker Hub,\nthere will already be a record for the namespace/name, but it will be\ninitialized. Should we allow it, or mark as name already used? One edge\ncase could be if someone pushes the same thing at the same time with two\ndifferent shells. If it's a retry on the Registry, Docker has a cookie (provided by the\nregistry after token validation). So the Docker Hub won't have to provide a\nnew token. Delete If you need to delete something from the Docker Hub or registry, we need a\nnice clean way to do that. Here is the workflow. 
Docker contacts the Docker Hub to request a delete of a repository\n samalba/busybox (authentication required with user credentials) If authentication works and repository is valid, samalba/busybox \n is marked as deleted and a temporary token is returned Send a delete request to the registry for the repository (along with\n the token) Registry A contacts the Docker Hub to verify the token (token must\n corresponds to the repository name) Docker Hub validates the token. Registry A deletes the repository and\n everything associated to it. docker contacts the Docker Hub to let it know it was removed from the\n registry, the Docker Hub removes all records from the database. Note :\nThe Docker client should present an \"Are you sure?\" prompt to confirm\nthe deletion before starting the process. Once it starts it can't be\nundone. API (deleting repository foo/bar): (Docker - Docker Hub) DELETE /v1/repositories/foo/bar/ Headers : Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token:\n true Action : in Docker Hub, we make sure it is a valid repository, and set\n to deleted (logically) Body : Empty (Docker Hub - Docker) 202 Accepted Headers : WWW-Authenticate: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete\n X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]\n # list of endpoints where this repo lives. (Docker - Registry) DELETE /v1/repositories/foo/bar/ Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete (Registry- Docker Hub) PUT /v1/repositories/foo/bar/auth Headers : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=delete Action : Docker Hub:\n will invalidate the token. 
Registry:\n deletes the repository (if token is approved) (Registry - Docker) 200 OK 200 If success 403 if forbidden 400 if bad request 404\nif repository isn't found (Docker - Docker Hub) DELETE /v1/repositories/foo/bar/ Headers : Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints:\n registry-1.docker.io (no validation on this right now) Body : Empty Return : HTTP 200",
"title": "Workflow"
},
{
"loc": "/reference/api/hub_registry_spec#how-to-use-the-registry-in-standalone-mode",
"tags": "",
"text": "The Docker Hub has two main purposes (along with its fancy social features): Resolve short names (to avoid passing absolute URLs all the time): username/projectname - \nhttps://registry.docker.io/users/ /repositories/ /\nteam/projectname - \nhttps://registry.docker.io/team/ /repositories/ / Authenticate a user as a repos owner (for a central referenced\n repository) Without a Docker Hub Using the Registry without the Docker Hub can be useful to store the images\non a private network without having to rely on an external entity\ncontrolled by Docker Inc. In this case, the registry will be launched in a special mode\n(-standalone? ne? -no-index?). In this mode, the only thing which changes is\nthat Registry will never contact the Docker Hub to verify a token. It will be\nthe Registry owner responsibility to authenticate the user who pushes\n(or even pulls) an image using any mechanism (HTTP auth, IP based,\netc...). In this scenario, the Registry is responsible for the security in case\nof data corruption since the checksums are not delivered by a trusted\nentity. As hinted previously, a standalone registry can also be implemented by\nany HTTP server handling GET/PUT requests (or even only GET requests if\nno write access is necessary). With a Docker Hub The Docker Hub data needed by the Registry are simple: Serve the checksums Provide and authorize a Token In the scenario of a Registry running on a private network with the need\nof centralizing and authorizing, it's easy to use a custom Docker Hub. The only challenge will be to tell Docker to contact (and trust) this\ncustom Docker Hub. Docker will be configurable at some point to use a\nspecific Docker Hub, it'll be the private entity responsibility (basically\nthe organization who uses Docker in a private environment) to maintain\nthe Docker Hub and the Docker's configuration among its consumers.",
"title": "How to use the Registry in standalone mode"
},
{
"loc": "/reference/api/hub_registry_spec#the-api",
"tags": "",
"text": "The first version of the api is available here: https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md Images The format returned in the images is not defined here (for layer and\nJSON), basically because Registry stores exactly the same kind of\ninformation as Docker uses to manage them. The format of ancestry is a line-separated list of image ids, in age\norder, i.e. the image's parent is on the last line, the parent of the\nparent on the next-to-last line, etc.; if the image has no parent, the\nfile is empty. GET /v1/images/ image_id /layer\nPUT /v1/images/ image_id /layer\nGET /v1/images/ image_id /json\nPUT /v1/images/ image_id /json\nGET /v1/images/ image_id /ancestry\nPUT /v1/images/ image_id /ancestry Users Create a user (Docker Hub) POST /v1/users: Body : {\"email\": \"[sam@docker.com](mailto:sam%40docker.com)\",\n\"password\": \"toto42\", \"username\": \"foobar\"`} Validation : username : min 4 character, max 30 characters, must match the\n regular expression [a-z0-9_]. password : min 5 characters Valid : return HTTP 201 Errors: HTTP 400 (we should create error codes for possible errors) -\ninvalid json - missing field - wrong format (username, password, email,\netc) - forbidden name - name already exists Note :\nA user account will be valid only if the email has been validated (a\nvalidation link is sent to the email address). Update a user (Docker Hub) PUT /v1/users/ username Body : {\"password\": \"toto\"} Note :\nWe can also update email address, if they do, they will need to reverify\ntheir new email address. Login (Docker Hub) Does nothing else but asking for a user authentication. Can be used to\nvalidate credentials. HTTP Basic Auth for now, maybe change in future. GET /v1/users Return :\n- Valid: HTTP 200\n- Invalid login: HTTP 401\n- Account inactive: HTTP 403 Account is not Active Tags (Registry) The Registry does not know anything about users. 
Even though\nrepositories are under usernames, it's just a namespace for the\nregistry. Allowing us to implement organizations or different namespaces\nper user later, without modifying the Registry's API. The following naming restrictions apply: Namespaces must match the same regular expression as usernames (See\n 4.2.1.) Repository names must match the regular expression [a-zA-Z0-9-_.] Get all tags: GET /v1/repositories/ namespace / repository_name /tags\n\n**Return**: HTTP 200\n[\n {\n \"layer\": \"9e89cc6f\",\n \"name\": \"latest\"\n },\n {\n \"layer\": \"b486531f\",\n \"name\": \"0.1.1\",\n }\n] 4.3.2 Read the content of a tag (resolve the image id): GET /v1/repositories/ namespace / repo_name /tags/ tag Return : \"9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\" 4.3.3 Delete a tag (registry): DELETE /v1/repositories/ namespace / repo_name /tags/ tag 4.4 Images (Docker Hub) For the Docker Hub to \u201cresolve\u201d the repository name to a Registry location,\nit uses the X-Docker-Endpoints header. In other terms, this requests\nalways add a X-Docker-Endpoints to indicate the\nlocation of the registry which hosts this repository. 4.4.1 Get the images: GET /v1/repositories/ namespace / repo_name /images\n\n**Return**: HTTP 200\n[{\u201cid\u201d:\n\u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n\u201cchecksum\u201d:\n\u201c[md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087](md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087)\u201d}] Add/update the images: You always add images, you never remove them. 
PUT /v1/repositories/ namespace / repo_name /images Body : [ {\u201cid\u201d:\n\u201c9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f\u201d,\n\u201cchecksum\u201d:\n\u201csha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087\u201d}\n] Return : 204 Repositories Remove a Repository (Registry) DELETE /v1/repositories/ / Return 200 OK Remove a Repository (Docker Hub) This starts the delete process. see 2.3 for more details. DELETE /v1/repositories/ / Return 202 OK",
"title": "The API"
},
{
"loc": "/reference/api/hub_registry_spec#chaining-registries",
"tags": "",
"text": "It's possible to chain Registry servers for several reasons: Load balancing Delegate the next request to another server When a Registry is a reference for a repository, it should host the\nentire image chain in order to avoid breaking the chain during the\ndownload. The Docker Hub and Registry use this mechanism to redirect to one or the\nother. Example with an image download: On every request, a special header can be returned: X-Docker-Endpoints: server1,server2 On the next request, the client will always pick a server from this\nlist.",
"title": "Chaining Registries"
},
{
"loc": "/reference/api/hub_registry_spec#authentication-authorization",
"tags": "",
"text": "On the Docker Hub The Docker Hub supports both \u201cBasic\u201d and \u201cToken\u201d challenges. Usually when\nthere is a 401 Unauthorized , the Docker Hub replies\nthis: 401 Unauthorized\nWWW-Authenticate: Basic realm=\"auth required\",Token You have 3 options: Provide user credentials and ask for a token Header : Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n X-Docker-Token: true In this case, along with the 200 response, you'll get a new token\n(if user auth is ok): If authorization isn't correct you get a 401\nresponse. If account isn't active you will get a 403 response. Response : 200 OK\n X-Docker-Token: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=read Provide user credentials only Header : Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== Provide Token Header : Authorization: Token\n signature=123abc,repository=\u201dfoo/bar\u201d,access=read 6.2 On the Registry The Registry only supports the Token challenge: 401 Unauthorized\nWWW-Authenticate: Token The only way is to provide a token on 401 Unauthorized \nresponses: Authorization: Token signature=123abc,repository=\"foo/bar\",access=read Usually, the Registry provides a Cookie when a Token verification\nsucceeded. Every time the Registry passes a Cookie, you have to pass it\nback the same cookie.: 200 OK\nSet-Cookie: session=\"wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4= timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4=\"; Path=/; HttpOnly Next request: GET /(...)\nCookie: session=\"wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4= timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4=\"",
"title": "Authentication & Authorization"
},
{
"loc": "/reference/api/hub_registry_spec#document-version",
"tags": "",
"text": "1.0 : May 6th 2013 : initial release 1.1 : June 1st 2013 : Added Delete Repository and way to handle new\n source namespace.",
"title": "Document Version"
},
{
"loc": "/reference/api/docker_remote_api/",
"tags": "",
"text": "Docker Remote API\n\nBy default the Docker daemon listens on unix:///var/run/docker.sock\n and the client must have root access to interact with the daemon.\nIf the Docker daemon is set to use an encrypted TCP socket (--tls,\n or --tlsverify) as with Boot2Docker 1.3.0, then you need to add extra\n parameters to curl or wget when making test API requests:\n curl --insecure --cert ~/.docker/cert.pem --key ~/.docker/key.pem https://boot2docker:2376/images/json\n or \n wget --no-check-certificate --certificate=$DOCKER_CERT_PATH/cert.pem --private-key=$DOCKER_CERT_PATH/key.pem https://boot2docker:2376/images/json -O - -q\nIf a group named docker exists on your system, docker will apply\n ownership of the socket to the group.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT, STDIN,\n and STDERR.\nSince API version 1.2, the auth configuration is now handled client\n side, so the client has to send the authConfig as a POST in /images/(name)/push.\nauthConfig, set as the X-Registry-Auth header, is currently a Base64\n encoded (JSON) string with the following structure:\n {\"username\": \"string\", \"password\": \"string\", \"email\": \"string\",\n \"serveraddress\" : \"string\", \"auth\": \"\"}. Notice that auth is to be left\n empty, serveraddress is a domain/ip without protocol, and that double\n quotes (instead of single ones) are required.\nThe Remote API uses an open schema model. 
In this model, unknown \n properties in incoming messages will be ignored.\n Client applications need to take this into account to ensure\n they will not break when talking to newer Docker daemons.\n\nThe current version of the API is v1.17\nCalling /info is the same as calling\n/v1.17/info.\nYou can still call an old version of the API using\n/v1.16/info.\nv1.17\nFull Documentation\nDocker Remote API v1.17\nWhat's new\nPOST /containers/(id)/attach and POST /exec/(id)/start\nNew!\nDocker client now hints potential proxies about connection hijacking using HTTP Upgrade headers.\nGET /containers/(id)/json\nNew!\nThis endpoint now returns the list current execs associated with the container (ExecIDs).\nPOST /containers/(id)/rename\nNew!\nNew endpoint to rename a container id to a new name.\nPOST /containers/create\nPOST /containers/(id)/start\nNew!\n(ReadonlyRootfs) can be passed in the host config to mount the container's\nroot filesystem as read only.\nGET /containers/(id)/stats\nNew!\nThis endpoint returns a live stream of a container's resource usage statistics.\n\nNote: this functionality currently only works when using the libcontainer exec-driver.\n\nv1.16\nFull Documentation\nDocker Remote API v1.16\nWhat's new\nGET /info\nNew!\ninfo now returns the number of CPUs available on the machine (NCPU),\ntotal memory available (MemTotal), a user-friendly name describing the running Docker daemon (Name), a unique ID identifying the daemon (ID), and\na list of daemon labels (Labels).\nPOST /containers/create\nNew!\nYou can set the new container's MAC address explicitly.\nNew!\nVolumes are now initialized when the container is created.\nPOST /containers/(id)/copy\nNew!\nYou can now copy data which is contained in a volume.\nv1.15\nFull Documentation\nDocker Remote API v1.15\nWhat's new\nPOST /containers/create\nNew!\nIt is now possible to set a container's HostConfig when creating a container.\nPreviously this was only available when starting a container.\nv1.14\nFull 
Documentation\nDocker Remote API v1.14\nWhat's new\nDELETE /containers/(id)\nNew!\nWhen using force, the container will be immediately killed with SIGKILL.\nPOST /containers/(id)/start\nNew!\nThe hostConfig option now accepts the field CapAdd, which specifies a list of capabilities\nto add, and the field CapDrop, which specifies a list of capabilities to drop.\nPOST /images/create\nNew!\nThe fromImage and repo parameters now supports the repo:tag format.\nConsequently, the tag parameter is now obsolete. Using the new format and\nthe tag parameter at the same time will return an error.\nv1.13\nFull Documentation\nDocker Remote API v1.13\nWhat's new\nGET /containers/(name)/json\nNew!\nThe HostConfig.Links field is now filled correctly\nNew!\nSockets parameter added to the /info endpoint listing all the sockets the \ndaemon is configured to listen on.\nPOST /containers/(name)/start\nPOST /containers/(name)/stop\nNew!\nstart and stop will now return 304 if the container's status is not modified\nPOST /commit\nNew!\nAdded a pause parameter (default true) to pause the container during commit\nv1.12\nFull Documentation\nDocker Remote API v1.12\nWhat's new\nPOST /build\nNew!\nBuild now has support for the forcerm parameter to always remove containers\nGET /containers/(name)/json\nGET /images/(name)/json\nNew!\nAll the JSON keys are now in CamelCase\nNew!\nTrusted builds are now Automated Builds - is_trusted is now is_automated.\nRemoved Insert Endpoint\nThe insert endpoint has been removed.\nv1.11\nFull Documentation\nDocker Remote API v1.11\nWhat's new\nGET /_ping\nNew!\nYou can now ping the server via the _ping endpoint.\nGET /events\nNew!\nYou can now use the -until parameter to close connection\nafter timestamp.\nGET /containers/(id)/logs\nThis url is preferred method for getting container logs now.\nv1.10\nFull Documentation\nDocker Remote API v1.10\nWhat's new\nDELETE /images/(name)\nNew!\nYou can now use the force parameter to force delete of an\n image, even if 
it's tagged in multiple repositories. New!\n You\n can now use the noprune parameter to prevent the deletion of parent\n images\nDELETE /containers/(id)\nNew!\nYou can now use the force parameter to force delete a\n container, even if it is currently running\nv1.9\nFull Documentation\nDocker Remote API v1.9\nWhat's new\nPOST /build\nNew!\nThis endpoint now takes a serialized ConfigFile which it\nuses to resolve the proper registry auth credentials for pulling the\nbase image. Clients which previously implemented the version\naccepting an AuthConfig object must be updated.\nv1.8\nFull Documentation\nDocker Remote API v1.8\nWhat's new\nPOST /build\nNew!\nThis endpoint now returns build status as json stream. In\ncase of a build error, it returns the exit status of the failed\ncommand.\nGET /containers/(id)/json\nNew!\nThis endpoint now returns the host config for the\ncontainer.\nPOST /images/create\nPOST /images/(name)/insert\nPOST /images/(name)/push\nNew!\nprogressDetail object was added in the JSON. It's now\npossible to get the current value and the total of the progress\nwithout having to parse the string.\nv1.7\nFull Documentation\nDocker Remote API v1.7\nWhat's new\nGET /images/json\nThe format of the json returned from this uri changed. 
Instead of an\nentry for each repo/tag on an image, each image is only represented\nonce, with a nested attribute indicating the repo/tags that apply to\nthat image.\nInstead of:\nHTTP/1.1 200 OK\nContent-Type: application/json\n\n[\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"12.04\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"latest\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"precise\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 180116135,\n \"Size\": 24653,\n \"Created\": 1364102658,\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Tag\": \"12.10\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 180116135,\n \"Size\": 24653,\n \"Created\": 1364102658,\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Tag\": \"quantal\",\n \"Repository\": \"ubuntu\"\n }\n]\n\nThe returned json looks like this:\nHTTP/1.1 200 OK\nContent-Type: application/json\n\n[\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n]\n\nGET /images/viz\nThis URI no longer exists. 
The images --viz\noutput is now generated in the client, using the\n/images/json data.\nv1.6\nFull Documentation\nDocker Remote API v1.6\nWhat's new\nPOST /containers/(id)/attach\nNew!\nYou can now split stderr from stdout. This is done by\nprefixing a header to each transmission. See\nPOST /containers/(id)/attach.\nThe WebSocket attach is unchanged. Note that attach calls on the\nprevious API version didn't change. Stdout and stderr are merged.\nv1.5\nFull Documentation\nDocker Remote API v1.5\nWhat's new\nPOST /images/create\nNew!\nYou can now pass registry credentials (via an AuthConfig\n object) through the X-Registry-Auth header\nPOST /images/(name)/push\nNew!\nThe AuthConfig object now needs to be passed through the\n X-Registry-Auth header\nGET /containers/json\nNew!\nThe format of the Ports entry has been changed to a list of\ndicts each containing PublicPort, PrivatePort and Type describing a\nport mapping.\nv1.4\nFull Documentation\nDocker Remote API v1.4\nWhat's new\nPOST /images/create\nNew!\nWhen pulling a repo, all images are now downloaded in parallel.\nGET /containers/(id)/top\nNew!\nYou can now use ps args with docker top, like docker top\n aux\nGET /events\nNew!\nImage's name added in the events\nv1.3\ndocker v0.5.0\n51f6c4a\nFull Documentation\nDocker Remote API v1.3\nWhat's new\nGET /containers/(id)/top\nList the processes running inside a container.\nGET /events\nNew!\nMonitor docker's events via streaming or via polling\nBuilder (/build):\n\nSimplify the upload of the build context\nSimply stream a tarball instead of multipart upload with 4\n intermediary buffers\nSimpler, less memory usage, less disk usage and faster\n\n\nWarning: \nThe /build improvements are not reverse-compatible. 
Pre 1.3 clients will\nbreak on /build.\n\nList containers (/containers/json):\n\nYou can use size=1 to get the size of the containers\n\nStart containers (/containers/(id)/start):\n\nYou can now pass host-specific configuration (e.g., bind mounts) in\n the POST body for start calls\n\nv1.2\ndocker v0.4.2\n2e7649b\nFull Documentation\nDocker Remote API v1.2\nWhat's new\nThe auth configuration is now handled by the client.\nThe client should send its authConfig as POST on each call of\n/images/(name)/push\nGET /auth\nDeprecated.\nPOST /auth\nOnly checks the configuration but doesn't store it on the server\nDeleting an image is now improved: it will only untag the image if it\nhas children and remove all the untagged parents if it has any.\n\nPOST /images/(name)/delete\nNow returns a JSON structure with the list of images\ndeleted/untagged.\nv1.1\ndocker v0.4.0\na8ae398\nFull Documentation\nDocker Remote API v1.1\nWhat's new\nPOST /images/create\nPOST /images/(name)/insert\nPOST /images/(name)/push\nUses json stream instead of HTML hijack, it looks like this:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nv1.0\ndocker v0.3.4\n8d73740\nFull Documentation\nDocker Remote API v1.0\nWhat's new\nInitial version",
"title": "Docker Remote API"
},
{
"loc": "/reference/api/docker_remote_api#docker-remote-api",
"tags": "",
"text": "By default the Docker daemon listens on unix:///var/run/docker.sock \n and the client must have root access to interact with the daemon. If the Docker daemon is set to use an encrypted TCP socket ( --tls ,\n or --tlsverify ) as with Boot2Docker 1.3.0, then you need to add extra\n parameters to curl or wget when making test API requests:\n curl --insecure --cert ~/.docker/cert.pem --key ~/.docker/key.pem https://boot2docker:2376/images/json \n or \n wget --no-check-certificate --certificate=$DOCKER_CERT_PATH/cert.pem --private-key=$DOCKER_CERT_PATH/key.pem https://boot2docker:2376/images/json -O - -q If a group named docker exists on your system, docker will apply\n ownership of the socket to the group. The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT, STDIN,\n and STDERR. Since API version 1.2, the auth configuration is now handled client\n side, so the client has to send the authConfig as a POST in /images/(name)/push . authConfig, set as the X-Registry-Auth header, is currently a Base64\n encoded (JSON) string with the following structure:\n {\"username\": \"string\", \"password\": \"string\", \"email\": \"string\",\n \"serveraddress\" : \"string\", \"auth\": \"\"} . Notice that auth is to be left\n empty, serveraddress is a domain/ip without protocol, and that double\n quotes (instead of single ones) are required. The Remote API uses an open schema model. In this model, unknown \n properties in incoming messages will be ignored.\n Client applications need to take this into account to ensure\n they will not break when talking to newer Docker daemons. The current version of the API is v1.17 Calling /info is the same as calling /v1.17/info . You can still call an old version of the API using /v1.16/info .",
"title": "Docker Remote API"
},
{
"loc": "/reference/api/docker_remote_api#v117",
"tags": "",
"text": "Full Documentation Docker Remote API v1.17 What's new POST /containers/(id)/attach and POST /exec/(id)/start New! \nDocker client now hints potential proxies about connection hijacking using HTTP Upgrade headers. GET /containers/(id)/json New! \nThis endpoint now returns the list current execs associated with the container ( ExecIDs ). POST /containers/(id)/rename New! \nNew endpoint to rename a container id to a new name. POST /containers/create POST /containers/(id)/start New! \n( ReadonlyRootfs ) can be passed in the host config to mount the container's\nroot filesystem as read only. GET /containers/(id)/stats New! \nThis endpoint returns a live stream of a container's resource usage statistics. Note : this functionality currently only works when using the libcontainer exec-driver.",
"title": "v1.17"
},
{
"loc": "/reference/api/docker_remote_api#v116",
"tags": "",
"text": "Full Documentation Docker Remote API v1.16 What's new GET /info New! info now returns the number of CPUs available on the machine ( NCPU ),\ntotal memory available ( MemTotal ), a user-friendly name describing the running Docker daemon ( Name ), a unique ID identifying the daemon ( ID ), and\na list of daemon labels ( Labels ). POST /containers/create New! \nYou can set the new container's MAC address explicitly. New! \nVolumes are now initialized when the container is created. POST /containers/(id)/copy New! \nYou can now copy data which is contained in a volume.",
"title": "v1.16"
},
{
"loc": "/reference/api/docker_remote_api#v115",
"tags": "",
"text": "Full Documentation Docker Remote API v1.15 What's new POST /containers/create New! \nIt is now possible to set a container's HostConfig when creating a container.\nPreviously this was only available when starting a container.",
"title": "v1.15"
},
{
"loc": "/reference/api/docker_remote_api#v114",
"tags": "",
"text": "Full Documentation Docker Remote API v1.14 What's new DELETE /containers/(id) New! \nWhen using force , the container will be immediately killed with SIGKILL. POST /containers/(id)/start New! \nThe hostConfig option now accepts the field CapAdd , which specifies a list of capabilities\nto add, and the field CapDrop , which specifies a list of capabilities to drop. POST /images/create New! \nThe fromImage and repo parameters now supports the repo:tag format.\nConsequently, the tag parameter is now obsolete. Using the new format and\nthe tag parameter at the same time will return an error.",
"title": "v1.14"
},
{
"loc": "/reference/api/docker_remote_api#v113",
"tags": "",
"text": "Full Documentation Docker Remote API v1.13 What's new GET /containers/(name)/json New! \nThe HostConfig.Links field is now filled correctly New! Sockets parameter added to the /info endpoint listing all the sockets the \ndaemon is configured to listen on. POST /containers/(name)/start POST /containers/(name)/stop New! start and stop will now return 304 if the container's status is not modified POST /commit New! \nAdded a pause parameter (default true ) to pause the container during commit",
"title": "v1.13"
},
{
"loc": "/reference/api/docker_remote_api#v112",
"tags": "",
"text": "Full Documentation Docker Remote API v1.12 What's new POST /build New! \nBuild now has support for the forcerm parameter to always remove containers GET /containers/(name)/json GET /images/(name)/json New! \nAll the JSON keys are now in CamelCase New! \nTrusted builds are now Automated Builds - is_trusted is now is_automated . Removed Insert Endpoint \nThe insert endpoint has been removed.",
"title": "v1.12"
},
{
"loc": "/reference/api/docker_remote_api#v111",
"tags": "",
"text": "Full Documentation Docker Remote API v1.11 What's new GET /_ping New! \nYou can now ping the server via the _ping endpoint. GET /events New! \nYou can now use the -until parameter to close connection\nafter timestamp. GET /containers/(id)/logs This url is preferred method for getting container logs now.",
"title": "v1.11"
},
{
"loc": "/reference/api/docker_remote_api#v110",
"tags": "",
"text": "Full Documentation Docker Remote API v1.10 What's new DELETE /images/(name) New! \nYou can now use the force parameter to force delete of an\n image, even if it's tagged in multiple repositories. New! \n You\n can now use the noprune parameter to prevent the deletion of parent\n images DELETE /containers/(id) New! \nYou can now use the force parameter to force delete a\n container, even if it is currently running",
"title": "v1.10"
},
{
"loc": "/reference/api/docker_remote_api#v19",
"tags": "",
"text": "Full Documentation Docker Remote API v1.9 What's new POST /build New! \nThis endpoint now takes a serialized ConfigFile which it\nuses to resolve the proper registry auth credentials for pulling the\nbase image. Clients which previously implemented the version\naccepting an AuthConfig object must be updated.",
"title": "v1.9"
},
{
"loc": "/reference/api/docker_remote_api#v18",
"tags": "",
"text": "Full Documentation Docker Remote API v1.8 What's new POST /build New! \nThis endpoint now returns build status as json stream. In\ncase of a build error, it returns the exit status of the failed\ncommand. GET /containers/(id)/json New! \nThis endpoint now returns the host config for the\ncontainer. POST /images/create POST /images/(name)/insert POST /images/(name)/push New! \nprogressDetail object was added in the JSON. It's now\npossible to get the current value and the total of the progress\nwithout having to parse the string.",
"title": "v1.8"
},
{
"loc": "/reference/api/docker_remote_api#v17",
"tags": "",
"text": "Full Documentation Docker Remote API v1.7 What's new GET /images/json The format of the json returned from this uri changed. Instead of an\nentry for each repo/tag on an image, each image is only represented\nonce, with a nested attribute indicating the repo/tags that apply to\nthat image. Instead of: HTTP/1.1 200 OK\nContent-Type: application/json\n\n[\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"12.04\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"latest\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 131506275,\n \"Size\": 131506275,\n \"Created\": 1365714795,\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Tag\": \"precise\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 180116135,\n \"Size\": 24653,\n \"Created\": 1364102658,\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Tag\": \"12.10\",\n \"Repository\": \"ubuntu\"\n },\n {\n \"VirtualSize\": 180116135,\n \"Size\": 24653,\n \"Created\": 1364102658,\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Tag\": \"quantal\",\n \"Repository\": \"ubuntu\"\n }\n] The returned json looks like this: HTTP/1.1 200 OK\nContent-Type: application/json\n\n[\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n 
\"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n] GET /images/viz This URI no longer exists. The images --viz \noutput is now generated in the client, using the /images/json data.",
"title": "v1.7"
},
{
"loc": "/reference/api/docker_remote_api#v16",
"tags": "",
"text": "Full Documentation Docker Remote API v1.6 What's new POST /containers/(id)/attach New! \nYou can now split stderr from stdout. This is done by\nprefixing a header to each transmission. See POST /containers/(id)/attach .\nThe WebSocket attach is unchanged. Note that attach calls on the\nprevious API version didn't change. Stdout and stderr are merged.",
"title": "v1.6"
},
{
"loc": "/reference/api/docker_remote_api#v15",
"tags": "",
"text": "Full Documentation Docker Remote API v1.5 What's new POST /images/create New! \nYou can now pass registry credentials (via an AuthConfig\n object) through the X-Registry-Auth header POST /images/(name)/push New! \nThe AuthConfig object now needs to be passed through the\n X-Registry-Auth header GET /containers/json New! \nThe format of the Ports entry has been changed to a list of\ndicts each containing PublicPort, PrivatePort and Type describing a\nport mapping.",
"title": "v1.5"
},
{
"loc": "/reference/api/docker_remote_api#v14",
"tags": "",
"text": "Full Documentation Docker Remote API v1.4 What's new POST /images/create New! \nWhen pulling a repo, all images are now downloaded in parallel. GET /containers/(id)/top New! \nYou can now use ps args with docker top, like docker top\n aux GET /events New! \nImage's name added in the events",
"title": "v1.4"
},
{
"loc": "/reference/api/docker_remote_api#v13",
"tags": "",
"text": "docker v0.5.0 51f6c4a Full Documentation Docker Remote API v1.3 What's new GET /containers/(id)/top List the processes running inside a container. GET /events New! \nMonitor docker's events via streaming or via polling Builder (/build): Simplify the upload of the build context Simply stream a tarball instead of multipart upload with 4\n intermediary buffers Simpler, less memory usage, less disk usage and faster Warning : \nThe /build improvements are not reverse-compatible. Pre 1.3 clients will\nbreak on /build. List containers (/containers/json): You can use size=1 to get the size of the containers Start containers (/containers/(id)/start): You can now pass host-specific configuration (e.g., bind mounts) in\n the POST body for start calls",
"title": "v1.3"
},
{
"loc": "/reference/api/docker_remote_api#v12",
"tags": "",
"text": "docker v0.4.2 2e7649b Full Documentation Docker Remote API v1.2 What's new The auth configuration is now handled by the client. The client should send its authConfig as POST on each call of /images/(name)/push GET /auth Deprecated. POST /auth Only checks the configuration but doesn't store it on the server Deleting an image is now improved: it will only untag the image if it\nhas children and remove all the untagged parents if it has any. POST /images/(name)/delete Now returns a JSON structure with the list of images\ndeleted/untagged.",
"title": "v1.2"
},
{
"loc": "/reference/api/docker_remote_api#v11",
"tags": "",
"text": "docker v0.4.0 a8ae398 Full Documentation Docker Remote API v1.1 What's new POST /images/create POST /images/(name)/insert POST /images/(name)/push Uses json stream instead of HTML hijack, it looks like this: HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...",
"title": "v1.1"
},
{
"loc": "/reference/api/docker_remote_api#v10",
"tags": "",
"text": "docker v0.3.4 8d73740 Full Documentation Docker Remote API v1.0 What's new Initial version",
"title": "v1.0"
},
{
"loc": "/reference/api/docker_remote_api_v1.17/",
"tags": "",
"text": "Docker Remote API v1.17\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\nsize \u2013 1/True/true or 
0/False/false, Show the containers\n sizes\nfilters - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters:\nexited=int -- containers with exit code of int\nstatus=(restarting|running|paused|exited)\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"ReadonlyRootfs\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nHostname - A string value containing the desired hostname to use for the\n container.\nDomainname - A string value containing the desired domain name to use\n for 
the container.\nUser - A string value containing the user to use inside the container.\nMemory - Memory limit in bytes.\nMemorySwap - Total memory usage (memory + swap); set -1 to disable swap.\nCpuShares - An integer value containing the CPU Shares for the container\n (i.e., the relative weight vs. other containers).\n CpuSet - String value containing the cgroups Cpuset to use.\nAttachStdin - Boolean value, attaches to stdin.\nAttachStdout - Boolean value, attaches to stdout.\nAttachStderr - Boolean value, attaches to stderr.\nTty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed.\nOpenStdin - Boolean value, opens stdin.\nStdinOnce - Boolean value, close stdin after the 1 attached client disconnects.\nEnv - A list of environment variables in the form of VAR=value\nCmd - Command to run specified as a string or an array of strings.\nEntrypoint - Set the entrypoint for the container as a string or an array\n of strings\nImage - String value containing the image name to use for the container\nVolumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects.\nWorkingDir - A string value containing the working dir for commands to\n run in.\nNetworkDisabled - Boolean value, when true disables networking for the\n container\nExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \"port/tcp|udp: {}\" }\nSecurityOpts: A list of string values to customize labels for MLS\n systems, such as SELinux.\nHostConfig\nBinds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro\n (to make the bind-mount read-only inside the container).\nLinks - A list of links for the container. 
Each link entry should be\n of the form \"container_name:alias\".\nLxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver.\nPortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port/protocol: [{ \"HostPort\": \"port\" }] }\n Take note that port is specified as a string and not an integer value.\nPublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value.\nPrivileged - Gives the container full access to the host. Specified as\n a boolean value.\nReadonlyRootfs - Mount the container's root filesystem as read only.\n Specified as a boolean value.\nDns - A list of DNS servers for the container to use.\nDnsSearch - A list of DNS search domains\nExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"].\nVolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name[:ro|rw]\nCapAdd - A list of kernel capabilities to add to the container.\nCapDrop - A list of kernel capabilities to drop from the container.\nRestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount\n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server.\nNetworkMode - Sets the networking mode for the container. 
Supported\n values are: bridge, host, and container:name|id\nDevices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"}\n\nQuery Parameters:\n\nname \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n{\n \"AppArmorProfile\": \"\",\n \"Args\": [\n \"-c\",\n \"exit 9\"\n ],\n \"Config\": {\n \"AttachStderr\": true,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"Cmd\": [\n \"/bin/sh\",\n \"-c\",\n \"exit 9\"\n ],\n \"CpuShares\": 0,\n \"Cpuset\": \"\",\n \"Domainname\": \"\",\n \"Entrypoint\": null,\n \"Env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\n ],\n \"ExposedPorts\": null,\n \"Hostname\": \"ba033ac44011\",\n \"Image\": \"ubuntu\",\n \"MacAddress\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"NetworkDisabled\": false,\n \"OnBuild\": null,\n \"OpenStdin\": false,\n \"PortSpecs\": null,\n \"StdinOnce\": false,\n \"Tty\": false,\n \"User\": \"\",\n \"Volumes\": null,\n \"WorkingDir\": \"\"\n },\n \"Created\": \"2015-01-06T15:47:31.485331387Z\",\n \"Driver\": \"devicemapper\",\n \"ExecDriver\": \"native-0.2\",\n \"ExecIDs\": null,\n \"HostConfig\": {\n \"Binds\": null,\n \"CapAdd\": null,\n \"CapDrop\": null,\n \"ContainerIDFile\": \"\",\n \"Devices\": [],\n \"Dns\": null,\n \"DnsSearch\": null,\n \"ExtraHosts\": null,\n \"IpcMode\": \"\",\n \"Links\": null,\n \"LxcConf\": [],\n \"NetworkMode\": \"bridge\",\n \"PortBindings\": {},\n \"Privileged\": false,\n \"ReadonlyRootfs\": 
false,\n \"PublishAllPorts\": false,\n \"RestartPolicy\": {\n \"MaximumRetryCount\": 2,\n \"Name\": \"on-failure\"\n },\n \"SecurityOpt\": null,\n \"VolumesFrom\": null\n },\n \"HostnamePath\": \"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname\",\n \"HostsPath\": \"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts\",\n \"Id\": \"ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39\",\n \"Image\": \"04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2\",\n \"MountLabel\": \"\",\n \"Name\": \"/boring_euclid\",\n \"NetworkSettings\": {\n \"Bridge\": \"\",\n \"Gateway\": \"\",\n \"IPAddress\": \"\",\n \"IPPrefixLen\": 0,\n \"MacAddress\": \"\",\n \"PortMapping\": null,\n \"Ports\": null\n },\n \"Path\": \"/bin/sh\",\n \"ProcessLabel\": \"\",\n \"ResolvConfPath\": \"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf\",\n \"RestartCount\": 1,\n \"State\": {\n \"Error\": \"\",\n \"ExitCode\": 9,\n \"FinishedAt\": \"2015-01-06T15:47:32.080254511Z\",\n \"OOMKilled\": false,\n \"Paused\": false,\n \"Pid\": 0,\n \"Restarting\": false,\n \"Running\": false,\n \"StartedAt\": \"2015-01-06T15:47:32.072697474Z\"\n },\n \"Volumes\": {},\n \"VolumesRW\": {}\n}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n 
[\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1\n\nExample response:\n HTTP/1.1 101 UPGRADED\n Content-Type: application/vnd.docker.raw-stream\n Connection: Upgrade\n Upgrade: tcp\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfollow \u2013 1/True/true or 0/False/false, return stream. Default false\nstdout \u2013 1/True/true or 0/False/false, show stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, show stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false\ntail \u2013 Output specified number of lines at the end of logs: all or number. 
Default all\n\nStatus Codes:\n\n101 \u2013 no error, hints proxy about hijacking\n200 \u2013 no error, no upgrade header found\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container stats based on resource usage\nGET /containers/(id)/stats\nThis endpoint returns a live stream of a container's resource usage statistics.\n\nNote: this functionality currently only works when using the libcontainer exec-driver.\n\nExample request:\n GET /containers/redis1/stats HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"read\" : \"2015-01-08T22:57:31.547920715Z\",\n \"network\" : {\n \"rx_dropped\" : 0,\n \"rx_bytes\" : 648,\n \"rx_errors\" : 0,\n \"tx_packets\" : 8,\n \"tx_dropped\" : 0,\n \"rx_packets\" : 8,\n \"tx_errors\" : 0,\n \"tx_bytes\" : 648\n },\n \"memory_stats\" : {\n \"stats\" : {\n \"total_pgmajfault\" : 0,\n \"cache\" : 0,\n \"mapped_file\" : 0,\n \"total_inactive_file\" : 0,\n \"pgpgout\" : 414,\n \"rss\" : 6537216,\n \"total_mapped_file\" : 0,\n \"writeback\" : 0,\n \"unevictable\" : 0,\n \"pgpgin\" : 477,\n \"total_unevictable\" : 0,\n \"pgmajfault\" : 0,\n 
\"total_rss\" : 6537216,\n \"total_rss_huge\" : 6291456,\n \"total_writeback\" : 0,\n \"total_inactive_anon\" : 0,\n \"rss_huge\" : 6291456,\n \"hierarchical_memory_limit\" : 67108864,\n \"total_pgfault\" : 964,\n \"total_active_file\" : 0,\n \"active_anon\" : 6537216,\n \"total_active_anon\" : 6537216,\n \"total_pgpgout\" : 414,\n \"total_cache\" : 0,\n \"inactive_anon\" : 0,\n \"active_file\" : 0,\n \"pgfault\" : 964,\n \"inactive_file\" : 0,\n \"total_pgpgin\" : 477\n },\n \"max_usage\" : 6651904,\n \"usage\" : 6537216,\n \"failcnt\" : 0,\n \"limit\" : 67108864\n },\n \"blkio_stats\" : {},\n \"cpu_stats\" : {\n \"cpu_usage\" : {\n \"percpu_usage\" : [\n 16970827,\n 1839451,\n 7107380,\n 10571290\n ],\n \"usage_in_usermode\" : 10000000,\n \"total_usage\" : 36488948,\n \"usage_in_kernelmode\" : 20000000\n },\n \"system_cpu_usage\" : 20091722000000000,\n \"throttling_data\" : {}\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nResize a container TTY\nPOST /containers/(id)/resize?h=heightw=width\nResize the TTY for container with id. 
The container must be restarted for the resize to take effect.\nExample request:\n POST /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 No such container\n500 \u2013 Cannot resize container\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\nExample response:\n HTTP/1.1 204 No Content\n\nJson Parameters:\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already started\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already stopped\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call waits for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRename a container\nPOST 
/containers/(id)/rename\nRename the container id to a new_name\nExample request:\n POST /containers/e90e34656806/rename?name=new_name HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nname \u2013 new name for the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n409 - conflict, name already assigned\n500 \u2013 server error\n\nPause a container\nPOST /containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 101 UPGRADED\n Content-Type: application/vnd.docker.raw-stream\n Connection: Upgrade\n Upgrade: tcp\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n101 \u2013 no error, hints proxy about hijacking\n200 \u2013 no error, no upgrade header found\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header contains information about which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce - 1/True/true or 0/False/false, Kill then remove the container.\n Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n 
\"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters:\ndangling=true\n\nBuild image from a Dockerfile\nPOST /build\nBuild an image from a Dockerfile\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe input stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a build instructions file, typically called\nDockerfile at the root of the archive. 
The dockerfile parameter may be\nused to specify a different build instructions file by having its value be\nthe path to the alternate build instructions file to use.\nThe archive may include any number of other files,\nwhich will be accessible in the build context (See the ADD build\ncommand).\nQuery Parameters:\n\ndockerfile - path within the build context to the Dockerfile\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\npull - attempt to pull the image even if an older image exists locally\nrm - remove intermediate containers after a successful build (default behavior)\n\nforcerm - always remove intermediate containers (includes rm)\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import. 
The value may be a URL from which the image\n can be retrieved or - to read the image from the request body.\nrepo \u2013 repository\ntag \u2013 tag\n\nregistry \u2013 the registry to pull from\nRequest Headers:\n\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST 
/images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image 
on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Driver\":\"btrfs\",\n \"DriverStatus\": [[\"\"]],\n \"ExecutionDriver\":\"native-0.1\",\n \"KernelVersion\":\"3.12.0-1-amd64\",\n \"NCPU\":1,\n \"MemTotal\":2099236864,\n \"Name\":\"prod-server-42\",\n \"ID\":\"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\",\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"NEventsListener\":0,\n \"InitPath\":\"/usr/bin/docker\",\n \"InitSha1\":\"\",\n \"IndexServerAddress\":[\"https://index.docker.io/v1/\"],\n 
\"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true,\n \"Labels\":[\"storage=ssd\"],\n \"DockerRootDir\": \"/var/lib/docker\",\n \"OperatingSystem\": \"Boot2Docker\",\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17comment=messagerepo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n 
hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, exec_create, exec_start, export, kill, oom, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters:\nevent=string -- event to filter\nimage=string -- image to filter\ncontainer=string -- container to filter\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository specified\nby name.\nIf name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. 
If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images.\nGET /images/get\nGet a tarball containing all images and metadata for one or more repositories.\nFor each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID.\nSee the image tarball format for more details.\nExample request\n GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball 
defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\nExec Create\nPOST /containers/(id)/exec\nSets up an exec instance in a running container id\nExample request:\n POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n }\n\nJson Parameters:\n\nAttachStdin - Boolean value, attaches to stdin of the exec command.\nAttachStdout - Boolean value, attaches to stdout of the exec command.\nAttachStderr - Boolean value, attaches to stderr of the exec command.\nTty - Boolean value to allocate a pseudo-TTY\nCmd - Command to run specified as a string or an array of strings.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n\nExec Start\nPOST /exec/(id)/start\nStarts a previously set up exec instance id. If detach is true, this API\nreturns after starting the exec command. 
Otherwise, this API sets up an\ninteractive session with the exec command.\nExample request:\n POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {{ STREAM }}\n\nJson Parameters:\n\nDetach - Detach from the exec command\nTty - Boolean value to allocate a pseudo-TTY\n\nStatus Codes:\n\n201 \u2013 no error\n\n404 \u2013 no such exec instance\nStream details:\nSimilar to the stream behavior of POST /container/(id)/attach API\n\n\nExec Resize\nPOST /exec/(id)/resize\nResizes the tty session used by the exec command id.\nThis API is valid only if tty was specified as part of creating and starting the exec command.\nExample request:\n POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: text/plain\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: text/plain\n\nQuery Parameters:\n\nh \u2013 height of tty session\nw \u2013 width\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such exec instance\n\nExec Inspect\nGET /exec/(id)/json\nReturn low-level information about the exec command id.\nExample request:\n GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n {\n \"ID\" : \"11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39\",\n \"Running\" : false,\n \"ExitCode\" : 2,\n \"ProcessConfig\" : {\n \"privileged\" : false,\n \"user\" : \"\",\n \"tty\" : false,\n \"entrypoint\" : \"sh\",\n \"arguments\" : [\n \"-c\",\n \"exit 2\"\n ]\n },\n \"OpenStdin\" : false,\n \"OpenStderr\" : false,\n \"OpenStdout\" : false,\n \"Container\" : {\n \"State\" : {\n \"Running\" : true,\n \"Paused\" : false,\n \"Restarting\" : false,\n \"OOMKilled\" : false,\n \"Pid\" : 3650,\n \"ExitCode\" : 0,\n \"Error\" : \"\",\n \"StartedAt\" : \"2014-11-17T22:26:03.717657531Z\",\n \"FinishedAt\" : \"0001-01-01T00:00:00Z\"\n },\n 
\"ID\" : \"8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c\",\n \"Created\" : \"2014-11-17T22:26:03.626304998Z\",\n \"Path\" : \"date\",\n \"Args\" : [],\n \"Config\" : {\n \"Hostname\" : \"8f177a186b97\",\n \"Domainname\" : \"\",\n \"User\" : \"\",\n \"Memory\" : 0,\n \"MemorySwap\" : 0,\n \"CpuShares\" : 0,\n \"Cpuset\" : \"\",\n \"AttachStdin\" : false,\n \"AttachStdout\" : false,\n \"AttachStderr\" : false,\n \"PortSpecs\" : null,\n \"ExposedPorts\" : null,\n \"Tty\" : false,\n \"OpenStdin\" : false,\n \"StdinOnce\" : false,\n \"Env\" : [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ],\n \"Cmd\" : [\n \"date\"\n ],\n \"Image\" : \"ubuntu\",\n \"Volumes\" : null,\n \"WorkingDir\" : \"\",\n \"Entrypoint\" : null,\n \"NetworkDisabled\" : false,\n \"MacAddress\" : \"\",\n \"OnBuild\" : null,\n \"SecurityOpt\" : null\n },\n \"Image\" : \"5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5\",\n \"NetworkSettings\" : {\n \"IPAddress\" : \"172.17.0.2\",\n \"IPPrefixLen\" : 16,\n \"MacAddress\" : \"02:42:ac:11:00:02\",\n \"Gateway\" : \"172.17.42.1\",\n \"Bridge\" : \"docker0\",\n \"PortMapping\" : null,\n \"Ports\" : {}\n },\n \"ResolvConfPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf\",\n \"HostnamePath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname\",\n \"HostsPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts\",\n \"Name\" : \"/test\",\n \"Driver\" : \"aufs\",\n \"ExecDriver\" : \"native-0.2\",\n \"MountLabel\" : \"\",\n \"ProcessLabel\" : \"\",\n \"AppArmorProfile\" : \"\",\n \"RestartCount\" : 0,\n \"Volumes\" : {},\n \"VolumesRW\" : {}\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such exec instance\n500 - server error\n\n3. 
Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket.\nTo hint potential proxies about connection hijacking, Docker client sends\nconnection upgrade headers similarly to websocket.\nUpgrade: tcp\nConnection: Upgrade\n\nWhen Docker daemon detects the Upgrade header, it will switch its status code\nfrom 200 OK to 101 UPGRADED and resend the same headers.\nThis might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "Docker Remote API v1.17"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#docker-remote-api-v117",
"tags": "",
"text": "",
"title": "Docker Remote API v1.17"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. size \u2013 1/True/true or 0/False/false, Show the containers\n sizes filters - a json encoded value of the filters (a map[string][]string) to process on the containers list. 
Available filters: exited= int -- containers with exit code of int status=(restarting|running|paused|exited) Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"ReadonlyRootfs\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: Hostname - A string value containing the desired hostname to use for the\n container. Domainname - A string value containing the desired domain name to use\n for the container. User - A string value containing the user to use inside the container. Memory - Memory limit in bytes. 
MemorySwap - Total memory usage (memory + swap); set -1 to disable swap. CpuShares - An integer value containing the CPU shares for the container\n (i.e., the relative weight vs. other containers).\n CpuSet - String value containing the cgroups Cpuset to use. AttachStdin - Boolean value, attaches to stdin. AttachStdout - Boolean value, attaches to stdout. AttachStderr - Boolean value, attaches to stderr. Tty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed. OpenStdin - Boolean value, opens stdin. StdinOnce - Boolean value, close stdin after the first attached client disconnects. Env - A list of environment variables in the form of VAR=value Cmd - Command to run specified as a string or an array of strings. Entrypoint - Set the entrypoint for the container as a string or an array\n of strings Image - String value containing the image name to use for the container Volumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects. WorkingDir - A string value containing the working dir for commands to\n run in. NetworkDisabled - Boolean value, when true disables networking for the\n container ExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \" port / tcp|udp : {}\" } SecurityOpts : A list of string values to customize labels for MLS\n systems, such as SELinux. HostConfig Binds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro \n (to make the bind-mount read-only inside the container). Links - A list of links for the container. Each link entry should be of\n the form \"container_name:alias\". LxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver. 
PortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port / protocol : [{ \"HostPort\": \" port \" }] } \n Take note that port is specified as a string and not an integer value. PublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value. Privileged - Gives the container full access to the host. Specified as\n a boolean value. ReadonlyRootfs - Mount the container's root filesystem as read only.\n Specified as a boolean value. Dns - A list of DNS servers for the container to use. DnsSearch - A list of DNS search domains ExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"] . VolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name [: ro|rw ] CapAdd - A list of kernel capabilities to add to the container. CapDrop - A list of kernel capabilities to drop from the container. RestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount \n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server. NetworkMode - Sets the networking mode for the container. Supported\n values are: bridge , host , and container: name|id Devices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"} Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n{\n \"AppArmorProfile\": \"\",\n \"Args\": [\n \"-c\",\n \"exit 9\"\n ],\n \"Config\": {\n \"AttachStderr\": true,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"Cmd\": [\n \"/bin/sh\",\n \"-c\",\n \"exit 9\"\n ],\n \"CpuShares\": 0,\n \"Cpuset\": \"\",\n \"Domainname\": \"\",\n \"Entrypoint\": null,\n \"Env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\n ],\n \"ExposedPorts\": null,\n \"Hostname\": \"ba033ac44011\",\n \"Image\": \"ubuntu\",\n \"MacAddress\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"NetworkDisabled\": false,\n \"OnBuild\": null,\n \"OpenStdin\": false,\n \"PortSpecs\": null,\n \"StdinOnce\": false,\n \"Tty\": false,\n \"User\": \"\",\n \"Volumes\": null,\n \"WorkingDir\": \"\"\n },\n \"Created\": \"2015-01-06T15:47:31.485331387Z\",\n \"Driver\": \"devicemapper\",\n \"ExecDriver\": \"native-0.2\",\n \"ExecIDs\": null,\n \"HostConfig\": {\n \"Binds\": null,\n \"CapAdd\": null,\n \"CapDrop\": null,\n \"ContainerIDFile\": \"\",\n \"Devices\": [],\n \"Dns\": null,\n \"DnsSearch\": null,\n \"ExtraHosts\": null,\n \"IpcMode\": \"\",\n \"Links\": null,\n \"LxcConf\": [],\n \"NetworkMode\": \"bridge\",\n \"PortBindings\": {},\n \"Privileged\": false,\n \"ReadonlyRootfs\": false,\n \"PublishAllPorts\": false,\n \"RestartPolicy\": {\n \"MaximumRetryCount\": 2,\n \"Name\": \"on-failure\"\n },\n \"SecurityOpt\": null,\n \"VolumesFrom\": null\n },\n \"HostnamePath\": \"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname\",\n \"HostsPath\": 
\"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts\",\n \"Id\": \"ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39\",\n \"Image\": \"04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2\",\n \"MountLabel\": \"\",\n \"Name\": \"/boring_euclid\",\n \"NetworkSettings\": {\n \"Bridge\": \"\",\n \"Gateway\": \"\",\n \"IPAddress\": \"\",\n \"IPPrefixLen\": 0,\n \"MacAddress\": \"\",\n \"PortMapping\": null,\n \"Ports\": null\n },\n \"Path\": \"/bin/sh\",\n \"ProcessLabel\": \"\",\n \"ResolvConfPath\": \"/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf\",\n \"RestartCount\": 1,\n \"State\": {\n \"Error\": \"\",\n \"ExitCode\": 9,\n \"FinishedAt\": \"2015-01-06T15:47:32.080254511Z\",\n \"OOMKilled\": false,\n \"Paused\": false,\n \"Pid\": 0,\n \"Restarting\": false,\n \"Running\": false,\n \"StartedAt\": \"2015-01-06T15:47:32.072697474Z\"\n },\n \"Volumes\": {},\n \"VolumesRW\": {}\n} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id 
Example request : GET /containers/4fa6e0f0c678/logs?stderr=1 stdout=1 timestamps=1 follow=1 tail=10 HTTP/1.1 Example response : HTTP/1.1 101 UPGRADED\n Content-Type: application/vnd.docker.raw-stream\n Connection: Upgrade\n Upgrade: tcp\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream. Default false stdout \u2013 1/True/true or 0/False/false, show stdout log. Default false stderr \u2013 1/True/true or 0/False/false, show stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false tail \u2013 Output specified number of lines at the end of logs: all or number . Default all Status Codes: 101 \u2013 no error, hints proxy about hijacking 200 \u2013 no error, no upgrade header found 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container stats based on resource usage GET /containers/(id)/stats This endpoint returns a live stream of a container's resource usage statistics. Note : this functionality currently only works when using the libcontainer exec-driver. 
Example request : GET /containers/redis1/stats HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"read\" : \"2015-01-08T22:57:31.547920715Z\",\n \"network\" : {\n \"rx_dropped\" : 0,\n \"rx_bytes\" : 648,\n \"rx_errors\" : 0,\n \"tx_packets\" : 8,\n \"tx_dropped\" : 0,\n \"rx_packets\" : 8,\n \"tx_errors\" : 0,\n \"tx_bytes\" : 648\n },\n \"memory_stats\" : {\n \"stats\" : {\n \"total_pgmajfault\" : 0,\n \"cache\" : 0,\n \"mapped_file\" : 0,\n \"total_inactive_file\" : 0,\n \"pgpgout\" : 414,\n \"rss\" : 6537216,\n \"total_mapped_file\" : 0,\n \"writeback\" : 0,\n \"unevictable\" : 0,\n \"pgpgin\" : 477,\n \"total_unevictable\" : 0,\n \"pgmajfault\" : 0,\n \"total_rss\" : 6537216,\n \"total_rss_huge\" : 6291456,\n \"total_writeback\" : 0,\n \"total_inactive_anon\" : 0,\n \"rss_huge\" : 6291456,\n \"hierarchical_memory_limit\" : 67108864,\n \"total_pgfault\" : 964,\n \"total_active_file\" : 0,\n \"active_anon\" : 6537216,\n \"total_active_anon\" : 6537216,\n \"total_pgpgout\" : 414,\n \"total_cache\" : 0,\n \"inactive_anon\" : 0,\n \"active_file\" : 0,\n \"pgfault\" : 964,\n \"inactive_file\" : 0,\n \"total_pgpgin\" : 477\n },\n \"max_usage\" : 6651904,\n \"usage\" : 6537216,\n \"failcnt\" : 0,\n \"limit\" : 67108864\n },\n \"blkio_stats\" : {},\n \"cpu_stats\" : {\n \"cpu_usage\" : {\n \"percpu_usage\" : [\n 16970827,\n 1839451,\n 7107380,\n 10571290\n ],\n \"usage_in_usermode\" : 10000000,\n \"total_usage\" : 36488948,\n \"usage_in_kernelmode\" : 20000000\n },\n \"system_cpu_usage\" : 20091722000000000,\n \"throttling_data\" : {}\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Resize a container TTY POST /containers/(id)/resize?h= height w= width Resize the TTY for container with id . The container must be restarted for the resize to take effect. 
Example request : POST /containers/4fa6e0f0c678/resize?h=40 w=80 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8 Status Codes: 200 \u2013 no error 404 \u2013 No such container 500 \u2013 Cannot resize container Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json Example response : HTTP/1.1 204 No Content Json Parameters: Status Codes: 204 \u2013 no error 304 \u2013 container already started 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 304 \u2013 container already stopped 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call waits for the container to exit. 
Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Rename a container POST /containers/(id)/rename Rename the container id to a new_name Example request : POST /containers/e90e34656806/rename?name=new_name HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: name \u2013 new name for the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 409 - conflict name already assigned 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1 stream=0 stdout=1 HTTP/1.1 Example response : HTTP/1.1 101 UPGRADED\n Content-Type: application/vnd.docker.raw-stream\n Connection: Upgrade\n Upgrade: tcp\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 101 \u2013 no error, hints proxy about hijacking 200 \u2013 no error, no upgrade header found 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header contains the information about which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame encoded in the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0 stream=1 stdin=1 stdout=1 stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force - 1/True/true or 0/False/false, Kill then remove the container.\n Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Build image from a Dockerfile POST /build Build an image from a Dockerfile Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}} The input stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz. The archive must include a build instructions file, typically called Dockerfile at the root of the archive. The dockerfile parameter may be\nused to specify a different build instructions file by having its value be\nthe path to the alternate build instructions file to use. The archive may include any number of other files,\nwhich will be accessible in the build context (See the ADD build\ncommand ). 
Query Parameters: dockerfile - path within the build context to the Dockerfile t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image pull - attempt to pull the image even if an older image exists locally rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import. The value may be a URL from which the image\n can be retrieved or - to read the image from the request body. 
repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? 
(n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#23-misc",
"tags": "",
"text": "Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Driver\":\"btrfs\",\n \"DriverStatus\": [[\"\"]],\n \"ExecutionDriver\":\"native-0.1\",\n \"KernelVersion\":\"3.12.0-1-amd64\",\n \"NCPU\":1,\n \"MemTotal\":2099236864,\n \"Name\":\"prod-server-42\",\n \"ID\":\"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\",\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"NEventsListener\":0,\n \"InitPath\":\"/usr/bin/docker\",\n \"InitSha1\":\"\",\n \"IndexServerAddress\":[\"https://index.docker.io/v1/\"],\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true,\n \"Labels\":[\"storage=ssd\"],\n \"DockerRootDir\": \"/var/lib/docker\",\n \"OperatingSystem\": \"Boot2Docker\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's 
changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17 comment=message repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, exec_create, exec_start, export, kill, oom, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters: event= string -- event to filter image= string -- image to filter container= string -- container to filter Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository specified\nby name . If name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced. See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images. GET /images/get Get a tarball containing all images and metadata for one or more repositories. 
For each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID. See the image tarball format for more details. Example request GET /images/get?names=myname%2Fmyapp%3Alatest names=busybox Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository.\nSee the image tarball format for more details. Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. 
{ \"hello-world\" :\n { \"latest\" : \"565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1\" }\n} Exec Create POST /containers/(id)/exec Sets up an exec instance in a running container id Example request : POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ]\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n } Json Parameters: AttachStdin - Boolean value, attaches to stdin of the exec command. AttachStdout - Boolean value, attaches to stdout of the exec command. AttachStderr - Boolean value, attaches to stderr of the exec command. Tty - Boolean value to allocate a pseudo-TTY Cmd - Command to run specified as a string or an array of strings. Status Codes: 201 \u2013 no error 404 \u2013 no such container Exec Start POST /exec/(id)/start Starts a previously set up exec instance id . If detach is true, this API\nreturns after starting the exec command. Otherwise, this API sets up an\ninteractive session with the exec command. Example request : POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {{ STREAM }} Json Parameters: Detach - Detach from the exec command Tty - Boolean value to allocate a pseudo-TTY Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance Stream details :\nSimilar to the stream behavior of POST /container/(id)/attach API Exec Resize POST /exec/(id)/resize Resizes the tty session used by the exec command id .\nThis API is valid only if tty was specified as part of creating and starting the exec command. 
Example request : POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: text/plain Example response : HTTP/1.1 201 Created\n Content-Type: text/plain Query Parameters: h \u2013 height of tty session w \u2013 width of tty session Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance Exec Inspect GET /exec/(id)/json Return low-level information about the exec command id . Example request : GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n {\n \"ID\" : \"11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39\",\n \"Running\" : false,\n \"ExitCode\" : 2,\n \"ProcessConfig\" : {\n \"privileged\" : false,\n \"user\" : \"\",\n \"tty\" : false,\n \"entrypoint\" : \"sh\",\n \"arguments\" : [\n \"-c\",\n \"exit 2\"\n ]\n },\n \"OpenStdin\" : false,\n \"OpenStderr\" : false,\n \"OpenStdout\" : false,\n \"Container\" : {\n \"State\" : {\n \"Running\" : true,\n \"Paused\" : false,\n \"Restarting\" : false,\n \"OOMKilled\" : false,\n \"Pid\" : 3650,\n \"ExitCode\" : 0,\n \"Error\" : \"\",\n \"StartedAt\" : \"2014-11-17T22:26:03.717657531Z\",\n \"FinishedAt\" : \"0001-01-01T00:00:00Z\"\n },\n \"ID\" : \"8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c\",\n \"Created\" : \"2014-11-17T22:26:03.626304998Z\",\n \"Path\" : \"date\",\n \"Args\" : [],\n \"Config\" : {\n \"Hostname\" : \"8f177a186b97\",\n \"Domainname\" : \"\",\n \"User\" : \"\",\n \"Memory\" : 0,\n \"MemorySwap\" : 0,\n \"CpuShares\" : 0,\n \"Cpuset\" : \"\",\n \"AttachStdin\" : false,\n \"AttachStdout\" : false,\n \"AttachStderr\" : false,\n \"PortSpecs\" : null,\n \"ExposedPorts\" : null,\n \"Tty\" : false,\n \"OpenStdin\" : false,\n \"StdinOnce\" : false,\n \"Env\" : [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ],\n \"Cmd\" : [\n \"date\"\n ],\n \"Image\" : \"ubuntu\",\n \"Volumes\" : null,\n \"WorkingDir\" : \"\",\n \"Entrypoint\" : null,\n \"NetworkDisabled\" : 
false,\n \"MacAddress\" : \"\",\n \"OnBuild\" : null,\n \"SecurityOpt\" : null\n },\n \"Image\" : \"5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5\",\n \"NetworkSettings\" : {\n \"IPAddress\" : \"172.17.0.2\",\n \"IPPrefixLen\" : 16,\n \"MacAddress\" : \"02:42:ac:11:00:02\",\n \"Gateway\" : \"172.17.42.1\",\n \"Bridge\" : \"docker0\",\n \"PortMapping\" : null,\n \"Ports\" : {}\n },\n \"ResolvConfPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf\",\n \"HostnamePath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname\",\n \"HostsPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts\",\n \"Name\" : \"/test\",\n \"Driver\" : \"aufs\",\n \"ExecDriver\" : \"native-0.2\",\n \"MountLabel\" : \"\",\n \"ProcessLabel\" : \"\",\n \"AppArmorProfile\" : \"\",\n \"RestartCount\" : 0,\n \"Volumes\" : {},\n \"VolumesRW\" : {}\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such exec instance 500 - server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry creating the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout, and stderr on the same socket. To hint to potential proxies about connection hijacking, the Docker client sends\nconnection upgrade headers, similarly to websockets. Upgrade: tcp\nConnection: Upgrade When the Docker daemon detects the Upgrade header, it switches its status code\nfrom 200 OK to 101 UPGRADED and resends the same headers. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.17#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running Docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.16/",
"tags": "",
"text": "Docker Remote API v1.16\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\nsize \u2013 1/True/true or 
0/False/false, Show the containers\n sizes\nfilters - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters:\nexited=int -- containers with exit code of int\nstatus=(restarting|running|paused|exited)\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nHostname - A string value containing the desired hostname to use for the\n container.\nDomainname - A string value containing the desired domain name to use\n for the container.\nUser - A 
string value containing the user to use inside the container.\nMemory - Memory limit in bytes.\nMemorySwap- Total memory usage (memory + swap); set -1 to disable swap.\nCpuShares - An integer value containing the CPU Shares for container\n (i.e. the relative weight vs other containers).\n CpuSet - String value containing the cgroups Cpuset to use.\nAttachStdin - Boolean value, attaches to stdin.\nAttachStdout - Boolean value, attaches to stdout.\nAttachStderr - Boolean value, attaches to stderr.\nTty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed.\nOpenStdin - Boolean value, opens stdin.\nStdinOnce - Boolean value, close stdin after the first attached client disconnects.\nEnv - A list of environment variables in the form of VAR=value\nCmd - Command to run specified as a string or an array of strings.\nEntrypoint - Set the entrypoint for the container as a string or an array\n of strings\nImage - String value containing the image name to use for the container\nVolumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects.\nWorkingDir - A string value containing the working dir for commands to\n run in.\nNetworkDisabled - Boolean value, when true disables networking for the\n container\nExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \"port/tcp|udp: {}\" }\nSecurityOpts: A list of string values to customize labels for MLS\n systems, such as SELinux.\nHostConfig\nBinds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro\n (to make the bind-mount read-only inside the container).\nLinks - A list of links for the container. Each link entry should be of\n the form \"container_name:alias\".\nLxcConf - LXC specific configurations. 
These configurations will only\n work when using the lxc execution driver.\nPortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port/protocol: [{ \"HostPort\": \"port\" }] }\n Take note that port is specified as a string and not an integer value.\nPublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value.\nPrivileged - Gives the container full access to the host. Specified as\n a boolean value.\nDns - A list of dns servers for the container to use.\nDnsSearch - A list of DNS search domains\nExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"].\nVolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name[:ro|rw]\nCapAdd - A list of kernel capabilities to add to the container.\nCapDrop - A list of kernel capabilities to drop from the container.\nRestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount\n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server.\nNetworkMode - Sets the networking mode for the container. Supported\n values are: bridge, host, and container:name|id\nDevices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"}\n\nQuery Parameters:\n\nname \u2013 Assign the specified name to the container. 
Must\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList 
processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfollow \u2013 1/True/true or 0/False/false, return stream. Default false\nstdout \u2013 1/True/true or 0/False/false, show stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, show stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false\ntail \u2013 Output specified number of lines at the end of logs: all or number. 
Default all\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nResize a container TTY\nPOST /containers/(id)/resize?h=height&w=width\nResize the TTY for container with id. 
The container must be restarted for the resize to take effect.\nExample request:\n POST /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 No such container\n500 \u2013 Cannot resize container\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\nExample response:\n HTTP/1.1 204 No Content\n\nJson Parameters:\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already started\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already stopped\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call waits for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nPause a container\nPOST 
/containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain information about which stream the payload\nbelongs to (stdout or stderr). 
It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32).\nIt is encoded in the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce - 1/True/true or 0/False/false, Kill then remove the container.\n Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": 
\"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters:\ndangling=true\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import. 
The value may be a URL from which the image\n can be retrieved or - to read the image from the request body.\nrepo \u2013 repository\ntag \u2013 tag\n\nregistry \u2013 the registry to pull from\nRequest Headers:\n\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST 
/images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image onto a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 Created\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image 
on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. 
It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\npull - attempt to pull the image even if an older image exists locally\nrm - remove intermediate containers after a successful build (default behavior)\n\nforcerm - always remove intermediate containers (includes rm)\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Driver\":\"btrfs\",\n \"DriverStatus\": [[\"\"]],\n \"ExecutionDriver\":\"native-0.1\",\n \"KernelVersion\":\"3.12.0-1-amd64\",\n \"NCPU\":1,\n \"MemTotal\":2099236864,\n \"Name\":\"prod-server-42\",\n \"ID\":\"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\",\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"NEventsListener\":0,\n \"InitPath\":\"/usr/bin/docker\",\n \"InitSha1\":\"\",\n 
\"IndexServerAddress\":[\"https://index.docker.io/v1/\"],\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true,\n \"Labels\":[\"storage=ssd\"],\n \"DockerRootDir\": \"/var/lib/docker\",\n \"OperatingSystem\": \"Boot2Docker\",\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17comment=messagerepo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit 
message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters:\nevent=string -- event to filter\nimage=string -- image to filter\ncontainer=string -- container to filter\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository specified\nby name.\nIf name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. 
If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images.\nGET /images/get\nGet a tarball containing all images and metadata for one or more repositories.\nFor each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID.\nSee the image tarball format for more details.\nExample request\n GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball 
defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\nExec Create\nPOST /containers/(id)/exec\nSets up an exec instance in a running container id\nExample request:\n POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n }\n\nJson Parameters:\n\nAttachStdin - Boolean value, attaches to stdin of the exec command.\nAttachStdout - Boolean value, attaches to stdout of the exec command.\nAttachStderr - Boolean value, attaches to stderr of the exec command.\nTty - Boolean value to allocate a pseudo-TTY\nCmd - Command to run specified as a string or an array of strings.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n\nExec Start\nPOST /exec/(id)/start\nStarts a previously set up exec instance id. If detach is true, this API\nreturns after starting the exec command. 
Otherwise, this API sets up an\ninteractive session with the exec command.\nExample request:\n POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {{ STREAM }}\n\nJson Parameters:\n\nDetach - Detach from the exec command\nTty - Boolean value to allocate a pseudo-TTY\n\nStatus Codes:\n\n201 \u2013 no error\n\n404 \u2013 no such exec instance\nStream details:\nSimilar to the stream behavior of the POST /containers/(id)/attach API\n\n\nExec Resize\nPOST /exec/(id)/resize\nResizes the tty session used by the exec command id.\nThis API is valid only if tty was specified as part of creating and starting the exec command.\nExample request:\n POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: plain/text\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: plain/text\n\nQuery Parameters:\n\nh \u2013 height of tty session\nw \u2013 width\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such exec instance\n\nExec Inspect\nGET /exec/(id)/json\nReturn low-level information about the exec command id.\nExample request:\n GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: plain/text\n\n {\n \"ID\" : \"11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39\",\n \"Running\" : false,\n \"ExitCode\" : 2,\n \"ProcessConfig\" : {\n \"privileged\" : false,\n \"user\" : \"\",\n \"tty\" : false,\n \"entrypoint\" : \"sh\",\n \"arguments\" : [\n \"-c\",\n \"exit 2\"\n ]\n },\n \"OpenStdin\" : false,\n \"OpenStderr\" : false,\n \"OpenStdout\" : false,\n \"Container\" : {\n \"State\" : {\n \"Running\" : true,\n \"Paused\" : false,\n \"Restarting\" : false,\n \"OOMKilled\" : false,\n \"Pid\" : 3650,\n \"ExitCode\" : 0,\n \"Error\" : \"\",\n \"StartedAt\" : \"2014-11-17T22:26:03.717657531Z\",\n \"FinishedAt\" : \"0001-01-01T00:00:00Z\"\n },\n 
\"ID\" : \"8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c\",\n \"Created\" : \"2014-11-17T22:26:03.626304998Z\",\n \"Path\" : \"date\",\n \"Args\" : [],\n \"Config\" : {\n \"Hostname\" : \"8f177a186b97\",\n \"Domainname\" : \"\",\n \"User\" : \"\",\n \"Memory\" : 0,\n \"MemorySwap\" : 0,\n \"CpuShares\" : 0,\n \"Cpuset\" : \"\",\n \"AttachStdin\" : false,\n \"AttachStdout\" : false,\n \"AttachStderr\" : false,\n \"PortSpecs\" : null,\n \"ExposedPorts\" : null,\n \"Tty\" : false,\n \"OpenStdin\" : false,\n \"StdinOnce\" : false,\n \"Env\" : [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ],\n \"Cmd\" : [\n \"date\"\n ],\n \"Image\" : \"ubuntu\",\n \"Volumes\" : null,\n \"WorkingDir\" : \"\",\n \"Entrypoint\" : null,\n \"NetworkDisabled\" : false,\n \"MacAddress\" : \"\",\n \"OnBuild\" : null,\n \"SecurityOpt\" : null\n },\n \"Image\" : \"5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5\",\n \"NetworkSettings\" : {\n \"IPAddress\" : \"172.17.0.2\",\n \"IPPrefixLen\" : 16,\n \"MacAddress\" : \"02:42:ac:11:00:02\",\n \"Gateway\" : \"172.17.42.1\",\n \"Bridge\" : \"docker0\",\n \"PortMapping\" : null,\n \"Ports\" : {}\n },\n \"ResolvConfPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf\",\n \"HostnamePath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname\",\n \"HostsPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts\",\n \"Name\" : \"/test\",\n \"Driver\" : \"aufs\",\n \"ExecDriver\" : \"native-0.2\",\n \"MountLabel\" : \"\",\n \"ProcessLabel\" : \"\",\n \"AppArmorProfile\" : \"\",\n \"RestartCount\" : 0,\n \"Volumes\" : {},\n \"VolumesRW\" : {}\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such exec instance\n500 - server error\n\n3. 
Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "Docker Remote API v1.16"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#docker-remote-api-v116",
"tags": "",
"text": "",
"title": "Docker Remote API v1.16"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. size \u2013 1/True/true or 0/False/false, Show the containers\n sizes filters - a json encoded value of the filters (a map[string][]string) to process on the containers list. 
Available filters: exited= int -- containers with exit code of int status=(restarting|running|paused|exited) Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: Hostname - A string value containing the desired hostname to use for the\n container. Domainname - A string value containing the desired domain name to use\n for the container. User - A string value containing the user to use inside the container. Memory - Memory limit in bytes. MemorySwap - Total memory usage (memory + swap); set -1 to disable swap. 
CpuShares - An integer value containing the CPU Shares for container\n (i.e., the relative weight vs. other containers).\n CpuSet - String value containing the cgroups Cpuset to use. AttachStdin - Boolean value, attaches to stdin. AttachStdout - Boolean value, attaches to stdout. AttachStderr - Boolean value, attaches to stderr. Tty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed. OpenStdin - Boolean value, opens stdin. StdinOnce - Boolean value, close stdin after the first attached client disconnects. Env - A list of environment variables in the form of VAR=value Cmd - Command to run specified as a string or an array of strings. Entrypoint - Set the entrypoint for the container as a string or an array\n of strings Image - String value containing the image name to use for the container Volumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects. WorkingDir - A string value containing the working dir for commands to\n run in. NetworkDisabled - Boolean value, when true disables networking for the\n container ExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \" port / tcp|udp : {}\" } SecurityOpts : A list of string values to customize labels for MLS\n systems, such as SELinux. HostConfig Binds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro \n (to make the bind-mount read-only inside the container). Links - A list of links for the container. Each link entry should be\n of the form \"container_name:alias\". LxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver. PortBindings - A map of exposed container ports and the host port they\n should map to. 
It should be specified in the form\n { port / protocol : [{ \"HostPort\": \" port \" }] } \n Take note that port is specified as a string and not an integer value. PublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value. Privileged - Gives the container full access to the host. Specified as\n a boolean value. Dns - A list of dns servers for the container to use. DnsSearch - A list of DNS search domains ExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"] . VolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name [: ro|rw ] CapAdd - A list of kernel capabilities to add to the container. CapDrop - A list of kernel capabilities to drop from the container. RestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount \n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server. NetworkMode - Sets the networking mode for the container. Supported\n values are: bridge , host , and container: name|id Devices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"} Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top 
List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream. Default false stdout \u2013 1/True/true or 0/False/false, show stdout log. Default false stderr \u2013 1/True/true or 0/False/false, show stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false tail \u2013 Output specified number of lines at the end of logs: all or number . 
Default all Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Resize a container TTY POST /containers/(id)/resize?h=(height)&w=(width) Resize the TTY for container with id . The container must be restarted for the resize to take effect. 
Example request : POST /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8 Status Codes: 200 \u2013 no error 404 \u2013 No such container 500 \u2013 Cannot resize container Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json Example response : HTTP/1.1 204 No Content Json Parameters: Status Codes: 204 \u2013 no error 304 \u2013 container already started 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 304 \u2013 container already stopped 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call waits for the container to exit. 
Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header will contain the information on which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). 
It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force - 1/True/true or 0/False/false, Kill then remove the container.\n Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import. The value may be a URL from which the image\n can be retrieved or - to read the image from the request body. 
repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? 
(n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image onto a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image pull - attempt to pull the image even if an older image exists locally rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Driver\":\"btrfs\",\n \"DriverStatus\": [[\"\"]],\n \"ExecutionDriver\":\"native-0.1\",\n \"KernelVersion\":\"3.12.0-1-amd64\",\n \"NCPU\":1,\n \"MemTotal\":2099236864,\n \"Name\":\"prod-server-42\",\n \"ID\":\"7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS\",\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"NEventsListener\":0,\n \"InitPath\":\"/usr/bin/docker\",\n \"InitSha1\":\"\",\n \"IndexServerAddress\":[\"https://index.docker.io/v1/\"],\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true,\n \"Labels\":[\"storage=ssd\"],\n \"DockerRootDir\": \"/var/lib/docker\",\n \"OperatingSystem\": \"Boot2Docker\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the event list. Available filters: event= string -- event to filter image= string -- image to filter container= string -- container to filter Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository specified\nby name . If name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced. See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images. GET /images/get Get a tarball containing all images and metadata for one or more repositories. 
For each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID. See the image tarball format for more details. Example request GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository.\nSee the image tarball format for more details. Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. 
{\"hello-world\":\n {\"latest\": \"565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1\"}\n} Exec Create POST /containers/(id)/exec Sets up an exec instance in a running container id Example request : POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n } Json Parameters: AttachStdin - Boolean value, attaches to stdin of the exec command. AttachStdout - Boolean value, attaches to stdout of the exec command. AttachStderr - Boolean value, attaches to stderr of the exec command. Tty - Boolean value to allocate a pseudo-TTY Cmd - Command to run specified as a string or an array of strings. Status Codes: 201 \u2013 no error 404 \u2013 no such container Exec Start POST /exec/(id)/start Starts a previously set up exec instance id . If detach is true, this API\nreturns after starting the exec command. Otherwise, this API sets up an\ninteractive session with the exec command. Example request : POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/json\n\n {{ STREAM }} Json Parameters: Detach - Detach from the exec command Tty - Boolean value to allocate a pseudo-TTY Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance Stream details :\nSimilar to the stream behavior of POST /container/(id)/attach API Exec Resize POST /exec/(id)/resize Resizes the tty session used by the exec command id .\nThis API is valid only if tty was specified as part of creating and starting the exec command. 
Example request : POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: plain/text Example response : HTTP/1.1 201 OK\n Content-Type: plain/text Query Parameters: h \u2013 height of tty session w \u2013 width Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance Exec Inspect GET /exec/(id)/json Return low-level information about the exec command id . Example request : GET /exec/11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: plain/text\n\n {\n \"ID\" : \"11fb006128e8ceb3942e7c58d77750f24210e35f879dd204ac975c184b820b39\",\n \"Running\" : false,\n \"ExitCode\" : 2,\n \"ProcessConfig\" : {\n \"privileged\" : false,\n \"user\" : \"\",\n \"tty\" : false,\n \"entrypoint\" : \"sh\",\n \"arguments\" : [\n \"-c\",\n \"exit 2\"\n ]\n },\n \"OpenStdin\" : false,\n \"OpenStderr\" : false,\n \"OpenStdout\" : false,\n \"Container\" : {\n \"State\" : {\n \"Running\" : true,\n \"Paused\" : false,\n \"Restarting\" : false,\n \"OOMKilled\" : false,\n \"Pid\" : 3650,\n \"ExitCode\" : 0,\n \"Error\" : \"\",\n \"StartedAt\" : \"2014-11-17T22:26:03.717657531Z\",\n \"FinishedAt\" : \"0001-01-01T00:00:00Z\"\n },\n \"ID\" : \"8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c\",\n \"Created\" : \"2014-11-17T22:26:03.626304998Z\",\n \"Path\" : \"date\",\n \"Args\" : [],\n \"Config\" : {\n \"Hostname\" : \"8f177a186b97\",\n \"Domainname\" : \"\",\n \"User\" : \"\",\n \"Memory\" : 0,\n \"MemorySwap\" : 0,\n \"CpuShares\" : 0,\n \"Cpuset\" : \"\",\n \"AttachStdin\" : false,\n \"AttachStdout\" : false,\n \"AttachStderr\" : false,\n \"PortSpecs\" : null,\n \"ExposedPorts\" : null,\n \"Tty\" : false,\n \"OpenStdin\" : false,\n \"StdinOnce\" : false,\n \"Env\" : [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ],\n \"Cmd\" : [\n \"date\"\n ],\n \"Image\" : \"ubuntu\",\n \"Volumes\" : null,\n \"WorkingDir\" : \"\",\n \"Entrypoint\" : null,\n \"NetworkDisabled\" : 
false,\n \"MacAddress\" : \"\",\n \"OnBuild\" : null,\n \"SecurityOpt\" : null\n },\n \"Image\" : \"5506de2b643be1e6febbf3b8a240760c6843244c41e12aa2f60ccbb7153d17f5\",\n \"NetworkSettings\" : {\n \"IPAddress\" : \"172.17.0.2\",\n \"IPPrefixLen\" : 16,\n \"MacAddress\" : \"02:42:ac:11:00:02\",\n \"Gateway\" : \"172.17.42.1\",\n \"Bridge\" : \"docker0\",\n \"PortMapping\" : null,\n \"Ports\" : {}\n },\n \"ResolvConfPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/resolv.conf\",\n \"HostnamePath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hostname\",\n \"HostsPath\" : \"/var/lib/docker/containers/8f177a186b977fb451136e0fdf182abff5599a08b3c7f6ef0d36a55aaf89634c/hosts\",\n \"Name\" : \"/test\",\n \"Driver\" : \"aufs\",\n \"ExecDriver\" : \"native-0.2\",\n \"MountLabel\" : \"\",\n \"ProcessLabel\" : \"\",\n \"AppArmorProfile\" : \"\",\n \"RestartCount\" : 0,\n \"Volumes\" : {},\n \"VolumesRW\" : {}\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such exec instance 500 - server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#32-hijacking",
"tags": "",
"text": "In this version of the API, the /attach endpoint uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.16#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.15/",
"tags": "",
"text": "Docker Remote API v1.15\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\nsize \u2013 1/True/true or 
0/False/false, Show the containers\n sizes\nfilters - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters:\nexited=int -- containers with exit code of int\nstatus=(restarting|running|paused|exited)\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\": \"f91ddc4b01e079c4481a8340bbbeca4dbd33d6e4a10662e499f8eacbb5bf252b\",\n \"Warnings\": []\n }\n\nJson Parameters:\n\nHostname - A string value containing the desired hostname to use for the\n container.\nDomainname - A string value containing the desired 
domain name to use\n for the container.\nUser - A string value containing the user to use inside the container.\nMemory - Memory limit in bytes.\nMemorySwap - Total memory usage (memory + swap); set -1 to disable swap.\nCpuShares - An integer value containing the CPU Shares for container\n (i.e., the relative weight vs other containers).\n CpuSet - String value containing the cgroups Cpuset to use.\nAttachStdin - Boolean value, attaches to stdin.\nAttachStdout - Boolean value, attaches to stdout.\nAttachStderr - Boolean value, attaches to stderr.\nTty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed.\nOpenStdin - Boolean value, opens stdin.\nStdinOnce - Boolean value, close stdin after the first attached client disconnects.\nEnv - A list of environment variables in the form of VAR=value\nCmd - Command to run specified as a string or an array of strings.\nEntrypoint - Set the entrypoint for the container as a string or an array\n of strings\nImage - String value containing the image name to use for the container\nVolumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects.\nWorkingDir - A string value containing the working dir for commands to\n run in.\nNetworkDisabled - Boolean value, when true disables networking for the\n container\nExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \"port/tcp|udp: {}\" }\nSecurityOpts: A list of string values to customize labels for MLS\n systems, such as SELinux.\nHostConfig\nBinds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro\n (to make the bind-mount read-only inside the container).\nLinks - A list of links for the container. 
Each link entry should be\n of the form \"container_name:alias\".\nLxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver.\nPortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port/protocol: [{ \"HostPort\": \"port\" }] }\n Take note that port is specified as a string and not an integer value.\nPublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value.\nPrivileged - Gives the container full access to the host. Specified as\n a boolean value.\nDns - A list of DNS servers for the container to use.\nDnsSearch - A list of DNS search domains\nExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"].\nVolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name[:ro|rw]\nCapAdd - A list of kernel capabilities to add to the container.\nCapDrop - A list of kernel capabilities to drop from the container.\nRestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount\n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server.\nNetworkMode - Sets the networking mode for the container. 
Supported\n values are: bridge, host, and container:name|id\nDevices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"}\n\nQuery Parameters:\n\nname \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n 
\"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfollow \u2013 1/True/true or 0/False/false, return stream. Default false\nstdout \u2013 1/True/true or 0/False/false, show stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, show stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false\ntail \u2013 Output specified number of lines at the end of logs: all or number. 
Default all\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nResize a container TTY\nGET /containers/(id)/resize?h=height&w=width\nResize the TTY of container id\nExample request:\n GET /containers/4fa6e0f0c678/resize?h=40&w=80 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 No such container\n500 \u2013 bad file descriptor\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n 
\"Devices\": []\n }\n\nExample response:\n HTTP/1.1 204 No Content\n\nJson Parameters:\n\nBinds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro\n (to make the bind-mount read-only inside the container).\nLinks - A list of links for the container. Each link entry should be of\n of the form \"container_name:alias\".\nLxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver.\nPortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port/protocol: [{ \"HostPort\": \"port\" }] }\n Take note that port is specified as a string and not an integer value.\nPublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value.\nPrivileged - Gives the container full access to the host. Specified as\n a boolean value.\nDns - A list of dns servers for the container to use.\nDnsSearch - A list of DNS search domains\nVolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name[:ro|rw]\nCapAdd - A list of kernel capabilties to add to the container.\nCapdrop - A list of kernel capabilties to drop from the container.\nRestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount\n controls the number of times to retry before giving up.\n The default is not to restart. 
(optional)\n An ever increasing delay (double the previous delay, starting at 100mS)\n is added before each restart to prevent flooding the server.\nNetworkMode - Sets the networking mode for the container. Supported\n values are: bridge, host, and container:name|id\nDevices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"}\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already started\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already stopped\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nPause a container\nPOST /containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample 
response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain information on which stream the frame\nbelongs to (stdout or stderr). 
It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce - 1/True/true or 0/False/false, Kill then remove the container.\n Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": 
\"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters:\ndangling=true\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import. 
The value may be a URL from which the image\n can be retrieved or - to read the image from the request body.\nrepo \u2013 repository\ntag \u2013 tag\n\nregistry \u2013 the registry to pull from\nRequest Headers:\n\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST 
/images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image onto a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image 
on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. 
It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm - remove intermediate containers after a successful build (default behavior)\nforcerm - always remove intermediate containers (includes rm)\n\nRequest Headers:\n\nContent-type \u2013 should be set to \"application/tar\".\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n 
GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, 
stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository specified\nby name.\nIf name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. 
If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images.\nGET /images/get\nGet a tarball containing all images and metadata for one or more repositories.\nFor each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID.\nSee the image tarball format for more details.\nExample request\n GET /images/get?names=myname%2Fmyapp%3Alatest&names=busybox\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball 
defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\nExec Create\nPOST /containers/(id)/exec\nSets up an exec instance in a running container id\nExample request:\n POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ],\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n }\n\nJson Parameters:\n\nAttachStdin - Boolean value, attaches to stdin of the exec command.\nAttachStdout - Boolean value, attaches to stdout of the exec command.\nAttachStderr - Boolean value, attaches to stderr of the exec command.\nTty - Boolean value to allocate a pseudo-TTY\nCmd - Command to run specified as a string or an array of strings.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n\nExec Start\nPOST /exec/(id)/start\nStarts a previously set up exec instance id. If detach is true, this API\nreturns after starting the exec command. 
Otherwise, this API sets up an\ninteractive session with the exec command.\nExample request:\n POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false,\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {{ STREAM }}\n\nJson Parameters:\n\nDetach - Detach from the exec command\nTty - Boolean value to allocate a pseudo-TTY\n\nStatus Codes:\n\n201 \u2013 no error\n\n404 \u2013 no such exec instance\nStream details:\nSimilar to the stream behavior of POST /container/(id)/attach API\n\n\nExec Resize\nPOST /exec/(id)/resize\nResizes the tty session used by the exec command id.\nThis API is valid only if tty was specified as part of creating and starting the exec command.\nExample request:\n POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: plain/text\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: plain/text\n\nQuery Parameters:\n\nh \u2013 height of tty session\nw \u2013 width\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such exec instance\n\n3. Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#docker-remote-api-v115",
"tags": "",
"text": "",
"title": "Docker Remote API v1.15"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. size \u2013 1/True/true or 0/False/false, Show the containers\n sizes filters - a json encoded value of the filters (a map[string][]string) to process on the containers list. 
Available filters: exited= int -- containers with exit code of int status=(restarting|running|paused|exited) Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request: POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Entrypoint\": \"\",\n \"Image\": \"ubuntu\",\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"MacAddress\": \"12:34:56:78:9a:bc\",\n \"ExposedPorts\": {\n \"22/tcp\": {}\n },\n \"SecurityOpts\": [\"\"],\n \"HostConfig\": {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"ExtraHosts\": null,\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n }\n } Example response: HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\": \"f91ddc4b01e079c4481a8340bbbeca4dbd33d6e4a10662e499f8eacbb5bf252b\",\n \"Warnings\": []\n } Json Parameters: Hostname - A string value containing the desired hostname to use for the\n container. Domainname - A string value containing the desired domain name to use\n for the container. User - A string value containing the user to use inside the container. Memory - Memory limit in bytes. 
"MemorySwap - Total memory usage (memory + swap); set -1 to disable swap. CpuShares - An integer value containing the CPU Shares for container\n (i.e., the relative weight vs. other containers).\n CpuSet - String value containing the cgroups Cpuset to use. AttachStdin - Boolean value, attaches to stdin. AttachStdout - Boolean value, attaches to stdout. AttachStderr - Boolean value, attaches to stderr. Tty - Boolean value, Attach standard streams to a tty, including stdin if it is not closed. OpenStdin - Boolean value, opens stdin. StdinOnce - Boolean value, close stdin after the first attached client disconnects. Env - A list of environment variables in the form of VAR=value Cmd - Command to run specified as a string or an array of strings. Entrypoint - Set the entrypoint for the container as a string or an array\n of strings Image - String value containing the image name to use for the container Volumes \u2013 An object mapping mountpoint paths (strings) inside the\n container to empty objects. WorkingDir - A string value containing the working dir for commands to\n run in. NetworkDisabled - Boolean value, when true disables networking for the\n container ExposedPorts - An object mapping ports to an empty object in the form of:\n \"ExposedPorts\": { \" port / tcp|udp : {}\" } SecurityOpts : A list of string values to customize labels for MLS\n systems, such as SELinux. HostConfig Binds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro \n (to make the bind-mount read-only inside the container). Links - A list of links for the container. Each link entry should be\n of the form \"container_name:alias\". LxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver. 
"PortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port / protocol : [{ \"HostPort\": \" port \" }] } \n Take note that port is specified as a string and not an integer value. PublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value. Privileged - Gives the container full access to the host. Specified as\n a boolean value. Dns - A list of DNS servers for the container to use. DnsSearch - A list of DNS search domains ExtraHosts - A list of hostnames/IP mappings to be added to the\n container's /etc/hosts file. Specified in the form [\"hostname:IP\"] . VolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name [: ro|rw ] CapAdd - A list of kernel capabilities to add to the container. CapDrop - A list of kernel capabilities to drop from the container. RestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount \n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server. NetworkMode - Sets the networking mode for the container. Supported\n values are: bridge , host , and container: name|id Devices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"} Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
"Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top 
List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1 stdout=1 timestamps=1 follow=1 tail=10 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream. Default false stdout \u2013 1/True/true or 0/False/false, show stdout log. Default false stderr \u2013 1/True/true or 0/False/false, show stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, print timestamps for\n every log line. Default false tail \u2013 Output specified number of lines at the end of logs: all or number . 
Default all Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Resize a container TTY GET /containers/(id)/resize?h= height w= width Resize the TTY of container id Example request : GET /containers/4fa6e0f0c678/resize?h=40 w=80 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Length: 0\n Content-Type: text/plain; charset=utf-8 Status Codes: 200 \u2013 no error 404 \u2013 No such container 500 \u2013 bad file descriptor Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\": [\"/tmp:/tmp\"],\n \"Links\": [\"redis3:redis\"],\n \"LxcConf\": {\"lxc.utsname\":\"docker\"},\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\": false,\n \"Privileged\": false,\n \"Dns\": [\"8.8.8.8\"],\n \"DnsSearch\": [\"\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"],\n \"RestartPolicy\": { \"Name\": \"\", \"MaximumRetryCount\": 0 },\n \"NetworkMode\": \"bridge\",\n \"Devices\": []\n } Example response : HTTP/1.1 204 No Content Json 
"Parameters: Binds \u2013 A list of volume bindings for this container. Each volume\n binding is a string of the form container_path (to create a new\n volume for the container), host_path:container_path (to bind-mount\n a host path into the container), or host_path:container_path:ro \n (to make the bind-mount read-only inside the container). Links - A list of links for the container. Each link entry should be\n of the form \"container_name:alias\". LxcConf - LXC specific configurations. These configurations will only\n work when using the lxc execution driver. PortBindings - A map of exposed container ports and the host port they\n should map to. It should be specified in the form\n { port / protocol : [{ \"HostPort\": \" port \" }] } \n Take note that port is specified as a string and not an integer value. PublishAllPorts - Allocates a random host port for all of a container's\n exposed ports. Specified as a boolean value. Privileged - Gives the container full access to the host. Specified as\n a boolean value. Dns - A list of DNS servers for the container to use. DnsSearch - A list of DNS search domains VolumesFrom - A list of volumes to inherit from another container.\n Specified in the form container name [: ro|rw ] CapAdd - A list of kernel capabilities to add to the container. CapDrop - A list of kernel capabilities to drop from the container. RestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount \n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever-increasing delay (double the previous delay, starting at 100 ms)\n is added before each restart to prevent flooding the server. NetworkMode - Sets the networking mode for the container. 
"Supported\n values are: bridge , host , and container: name|id Devices - A list of devices to add to the container specified in the\n form\n { \"PathOnHost\": \"/dev/deviceName\", \"PathInContainer\": \"/dev/deviceName\", \"CgroupPermissions\": \"mrw\"} Status Codes: 204 \u2013 no error 304 \u2013 container already started 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 304 \u2013 container already stopped 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. 
"Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1 stream=0 stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header will contain the information on which stream the frame\nis written (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). 
"It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0 stream=1 stdin=1 stdout=1 stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
"Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then return the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated with the container. Default false force - 1/True/true or 0/False/false, Kill then remove the container.\n Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import. The value may be a URL from which the image\n can be retrieved or - to read the image from the request body. 
repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? 
(n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
"X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17 comment=message repo=myrepo HTTP/1.1\n Content-Type: 
application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
"Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository specified\nby name . If name is a specific name and tag (e.g. ubuntu:latest), then only that image\n(and its parents) are returned. If name is an image ID, similarly only that\nimage (and its parents) are returned, but with the exclusion of the\n'repositories' file in the tarball, as there were no image names referenced. See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images. GET /images/get Get a tarball containing all images and metadata for one or more repositories. For each value of the names parameter: if it is a specific name and tag (e.g.\nubuntu:latest), then only that image (and its parents) are returned; if it is\nan image ID, similarly only that image (and its parents) are returned and there\nwould be no names referenced in the 'repositories' file for this image ID. 
"See the image tarball format for more details. Example request GET /images/get?names=myname%2Fmyapp%3Alatest names=busybox Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository.\nSee the image tarball format for more details. Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n} Exec Create POST /containers/(id)/exec Sets up an exec instance in a running container id Example request : POST /containers/e90e34656806/exec HTTP/1.1\n Content-Type: application/json\n\n {\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"Tty\": false,\n \"Cmd\": [\n \"date\"\n ]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\": \"f90e34656806\"\n } Json Parameters: AttachStdin - Boolean value, attaches to stdin of the exec command. AttachStdout - Boolean value, attaches to stdout of the exec command. AttachStderr - Boolean value, attaches to stderr of the exec command. 
Tty - Boolean value to allocate a pseudo-TTY Cmd - Command to run specified as a string or an array of strings. Status Codes: 201 \u2013 no error 404 \u2013 no such container Exec Start POST /exec/(id)/start Starts a previously set up exec instance id . If detach is true, this API\nreturns after starting the exec command. Otherwise, this API sets up an\ninteractive session with the exec command. Example request : POST /exec/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Detach\": false,\n \"Tty\": false,\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/json\n\n {{ STREAM }} Json Parameters: Detach - Detach from the exec command Tty - Boolean value to allocate a pseudo-TTY Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance Stream details :\nSimilar to the stream behavior of POST /container/(id)/attach API Exec Resize POST /exec/(id)/resize Resizes the tty session used by the exec command id .\nThis API is valid only if tty was specified as part of creating and starting the exec command. Example request : POST /exec/e90e34656806/resize HTTP/1.1\n Content-Type: plain/text Example response : HTTP/1.1 201 OK\n Content-Type: plain/text Query Parameters: h \u2013 height of tty session w \u2013 width Status Codes: 201 \u2013 no error 404 \u2013 no such exec instance",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.15#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.14/",
"tags": "",
"text": "Docker Remote API v1.14\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, 
Show the containers sizes\nfilters - a json encoded value of the filters (a map[string][]string) to process on the containers list. Available filters:\nexited=int -- containers with exit code of int\nstatus=(restarting|running|paused|exited)\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n },\n \"RestartPolicy\": { \"Name\": \"always\" }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nRestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount\n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever increasing delay (double the previous delay, starting at 100mS)\n is added before each restart to prevent flooding the server.\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\nname \u2013 Assign the specified name to the container. 
Must match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList 
processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1stdout=1timestamps=1follow=1tail=10 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfollow \u2013 1/True/true or 0/False/false, return stream. Default false\nstdout \u2013 1/True/true or 0/False/false, show stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, show stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, print timestamps for every\n log line. Default false\ntail \u2013 Output specified number of lines at the end of logs: all or\n number. 
Default all\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n\nJson Parameters:\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already started\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 
number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already stopped\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nPause a container\nPOST /containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1stream=0stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. 
Default false\nstream \u2013 1/True/true or 0/False/false, return stream. Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header contains the information about which stream the payload\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs 
\u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. 
Default false\nforce - 1/True/true or 0/False/false, Kill then remove the container.\n Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. 
Available filters:\ndangling=true\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet 
the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. 
This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n 
\"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm - remove intermediate containers after a successful build (default behavior)\n\nforcerm - always remove intermediate containers (includes rm)\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile objec\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\" hannibal\",\n \"password: \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": 
\"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\"\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17comment=messagerepo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n 
\"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample 
response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. 
This might change in the future.\n3.3 CORS Requests\nTo enable cross-origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#docker-remote-api-v114",
"tags": "",
"text": "",
"title": "Docker Remote API v1.14"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. size \u2013 1/True/true or 0/False/false, Show the containers sizes filters - a json encoded value of the filters (a map[string][]string) to process on the containers list. 
Available filters: exited= int -- containers with exit code of int status=(restarting|running|paused|exited) Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n },\n \"RestartPolicy\": { \"Name\": \"always\" }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: RestartPolicy \u2013 The behavior to apply when the container exits. The\n value is an object with a Name property of either \"always\" to\n always restart or \"on-failure\" to restart only when the container\n exit code is non-zero. If on-failure is used, MaximumRetryCount \n controls the number of times to retry before giving up.\n The default is not to restart. (optional)\n An ever increasing delay (double the previous delay, starting at 100mS)\n is added before each restart to prevent flooding the server. config \u2013 the container's configuration Query Parameters: name \u2013 Assign the specified name to the container. Must match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false,\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top 
List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1 stdout=1 timestamps=1 follow=1 tail=10 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream. Default false stdout \u2013 1/True/true or 0/False/false, show stdout log. Default false stderr \u2013 1/True/true or 0/False/false, show stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, print timestamps for every\n log line. Default false tail \u2013 Output specified number of lines at the end of logs: all or\n number . 
Default all Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"],\n \"CapAdd\": [\"NET_ADMIN\"],\n \"CapDrop\": [\"MKNOD\"]\n } Example response : HTTP/1.1 204 No Content Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 304 \u2013 container already started 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 
304 \u2013 container already stopped 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1 stream=0 stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream. 
Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header contains the information about which stream\n(stdout or stderr) the frame belongs to. It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force - 1/True/true or 0/False/false, Kill then remove the container.\n Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1\n Content-Type: 
application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name . See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository.\nSee the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n}",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.14#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.13/",
"tags": "",
"text": "Docker Remote API v1.13\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, 
Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\n\n\nname \u2013 Assign the specified name to the container. 
Mus\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET 
/containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1stdout=1timestamps=1follow=1tail=10 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfollow \u2013 1/True/true or 0/False/false, return stream. Default false\nstdout \u2013 1/True/true or 0/False/false, show stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, show stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, print timestamps for every\n log line. Default false\ntail \u2013 Output specified number of lines at the end of logs: all or\n number. 
Default all\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already started\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait 
before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n304 \u2013 container already stopped\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nPause a container\nPOST /containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1stream=0stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. 
Default false\nstream \u2013 1/True/true or 0/False/false, return stream. Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen using the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information on which stream write the\nstream (stdout or stderr). It also contain the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nchose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 byets\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0stream=1stdin=1stdout=1stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs 
\u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. 
Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. 
Available filters:\ndangling=true\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet 
the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. 
This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n 
\"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm - remove intermediate containers after a successful build (default behavior)\n\nforcerm - always remove intermediate containers (includes rm)\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile objec\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\" hannibal\",\n \"password: \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": 
\"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\"\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17comment=messagerepo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n 
\"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample 
response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. 
This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#docker-remote-api-v113",
"tags": "",
"text": "",
"title": "Docker Remote API v1.13"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Query Parameters: name \u2013 Assign the specified name to the container. Mus\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": [\"/name:alias\"],\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request 
: GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1 stdout=1 timestamps=1 follow=1 tail=10 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream. Default false stdout \u2013 1/True/true or 0/False/false, show stdout log. Default false stderr \u2013 1/True/true or 0/False/false, show stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, print timestamps for every\n log line. Default false tail \u2013 Output specified number of lines at the end of logs: all or\n number . 
Default all Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 304 \u2013 container already started 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 304 \u2013 container already 
stopped 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream. 
Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header contains the information about which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1\n Content-Type: 
application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name . See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. See the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { \"hello-world\":\n { \"latest\": \"565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1\" }\n}",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.13#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.12/",
"tags": "",
"text": "Docker Remote API v1.12\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT,\n STDIN and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by defaul\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the containers\n 
sizes\nfilters \u2013 a JSON encoded value of the filters (a map[string][]string)\n to process on the images list.\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\n\n\nname \u2013 Assign the specified name to the container. 
Mus\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList 
processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1stdout=1timestamps=1follow=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\n\n\nfollow \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, if logs=true, prin\n timestamps for every log line. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus 
Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nPause a container\nPOST /containers/(id)/pause\nPause the container id\nExample request:\n POST /containers/e90e34656806/pause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nUnpause a container\nPOST /containers/(id)/unpause\nUnpause the container id\nExample request:\n POST /containers/e90e34656806/unpause HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1stream=0stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream. 
Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header contains which stream the\nframe belongs to (stdout or stderr). It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. 
Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. 
Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, default false\nfilters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. 
Available filters:\ndangling=true\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the 
history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. 
This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n 
\"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm - remove intermediate containers after a successful build (default behavior)\n\nforcerm - always remove intermediate containers (includes rm)\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": 
\"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 - no error\n500 - server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&comment=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n 
\"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\ncomment \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample 
response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. 
This might change in the future.\n3.3 CORS Requests\nTo enable cross-origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running Docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#docker-remote-api-v112",
"tags": "",
"text": "",
"title": "Docker Remote API v1.12"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can\n Bind Docker to another host/port or a Unix socket . The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT ,\n STDIN and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. size \u2013 1/True/true or 0/False/false, Show the containers\n sizes filters \u2013 a JSON encoded value of the filters (a map[string][]string)\n to process on the images list. 
Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"Domainname\": \"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET 
/containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream.\n Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, if logs=true, print\n timestamps for every log line. 
Default false Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"Links\":[\"redis3:redis\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a 
container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Pause a container POST /containers/(id)/pause Pause the container id Example request : POST /containers/e90e34656806/pause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Unpause a container POST /containers/(id)/unpause Unpause the container id Example request : POST /containers/e90e34656806/unpause HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream. Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach to stdin.\n Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header contains which stream the\nframe belongs to (stdout or stderr). It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1 Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, default false filters \u2013 a json encoded value of the filters (a map[string][]string) to process on the images list. Available filters: dangling=true Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Created\": \"2013-03-23T22:24:18.818426-07:00\",\n \"Container\": \"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"ContainerConfig\":\n {\n \"Hostname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"PortSpecs\": null,\n \"Tty\": true,\n \"OpenStdin\": true,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": null,\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Parent\": \"27cf784147099545\",\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_automated\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build (default behavior) forcerm - always remove intermediate containers (includes rm) Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"ApiVersion\": \"1.12\",\n \"Version\": \"0.2.2\",\n \"GitCommit\": \"5a2a5cc+CHANGES\",\n \"GoVersion\": \"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 - no error 500 - server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17 comment=message repo=myrepo HTTP/1.1\n Content-Type: 
application/json\n\n {\n \"Hostname\": \"\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"CpuShares\": 512,\n \"Cpuset\": \"0,1\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Volumes\": {\n \"/tmp\": {}\n },\n \"WorkingDir\": \"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\": {\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag comment \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name . See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository.\nSee the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n}",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry creating the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout, and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.12#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running Docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.11/",
"tags": "",
"text": "Docker Remote API v1.11\n1. Brief introduction\n\nThe Remote API has replaced rcli.\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport STDOUT, STDIN\n and STDERR.\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 
0/False/false, Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\nname \u2013 Assign the specified name to the container. 
Mus\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList 
processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nGet container logs\nGET /containers/(id)/logs\nGet stdout and stderr logs from the container id\nExample request:\n GET /containers/4fa6e0f0c678/logs?stderr=1stdout=1timestamps=1follow=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\n\n\nfollow \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log. Default false\ntimestamps \u2013 1/True/true or 0/False/false, if logs=true, prin\n timestamps for every log line. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no 
error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1stream=0stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Defaul\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen using the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information on which stream write the\nstream (stdout or stderr). It also contain the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nchose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 byets\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0stream=1stdin=1stdout=1stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\nforce \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. 
Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nCreate an image\nPOST /images/create\nCreate an image, either by pull it from the registry or by importing i\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 
repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": 
\"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's 
request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. 
It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm \u2013 remove intermediate containers after a successful build\n\nRequest Headers:\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nPing the docker server\nGET /_ping\nPing the docker server\nExample request:\n GET /_ping HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet container events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": 
\"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\nuntil \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 
565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's id\n\n\n\n3.2 Hijacking\nIn this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#docker-remote-api-v111",
"tags": "",
"text": "",
"title": "Docker Remote API v1.11"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli . The daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport STDOUT , STDIN \n and STDERR .",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers' sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET 
/containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Get container logs GET /containers/(id)/logs Get stdout and stderr logs from the container id Example request : GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: follow \u2013 1/True/true or 0/False/false, return stream.\n Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log. Default false timestamps \u2013 1/True/true or 0/False/false, if logs=true, print\n timestamps for every log line. 
Default false Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST 
/containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: signal \u2013 Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). 
HEADER The header will contain the information about which stream (stdout or stderr) the frame belongs to. It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32). It is encoded in the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false force \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 Created Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag \u2013 the new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\": 11,\n \"Images\": 16,\n \"Driver\": \"btrfs\",\n \"ExecutionDriver\": \"native-0.1\",\n \"KernelVersion\": \"3.12.0-1-amd64\",\n \"Debug\": false,\n \"NFd\": 11,\n \"NGoroutines\": 21,\n \"NEventsListener\": 0,\n \"InitPath\": \"/usr/bin/docker\",\n \"IndexServerAddress\": [\"https://index.docker.io/v1/\"],\n \"MemoryLimit\": true,\n \"SwapLimit\": false,\n \"IPv4Forwarding\": true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Ping the docker server GET /_ping Ping the docker server Example request : GET /_ping HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n OK Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n 
\"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get container events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling until \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name 
. See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. See the image tarball format for more details. Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n}",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#31-inside-docker-run",
"tags": "",
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.11#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.10/",
"tags": "",
"text": "Docker Remote API v1.10\n1. Brief introduction\n\nThe Remote API has replaced rcli\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, 
Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\n\n\nname \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"WorkingDir\":\"\"\n\n },\n \"State\": {\n 
\"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\n\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id 's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 
0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 
server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information about which stream the frame\nwas written to (stdout or stderr). 
It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns\n the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated with the container. Default false\nforce \u2013 1/True/true or 0/False/false, Removes the container\n even if it was running. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n 
\"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing\n it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image\n name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/ubuntu/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Image\":\"ubuntu\",\n \"Volumes\":null,\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/ubuntu/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n 
{\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image onto a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI.\n\nExample request:\n POST /images/registry.acme.com:5000/test/push HTTP/1.1\n\nQuery Parameters:\n\ntag \u2013 the tag to associate with the image on the registry, optional\n\nRequest Headers:\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nQuery Parameters:\n\nforce \u2013 1/True/true or 0/False/false, default false\nnoprune \u2013 1/True/true or 0/False/false, default false\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker 
daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\n\nat its root. 
It may include any number of other files,\n which will be accessible in the build context (See the ADD build\n command).\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 git or HTTP/HTTPS URI build source\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\n\nrm - remove intermediate containers after a successful build\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a 
new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 
\u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\n specified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. 
Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#docker-remote-api-v110",
"tags": "",
"text": "",
"title": "Docker Remote API v1.10"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli The daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport stdout, stdin \n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Image\":\"ubuntu\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"WorkingDir\":\"\"\n\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example 
response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false,\n \"Dns\": [\"8.8.8.8\"],\n \"VolumesFrom\": [\"parent\", \"other:ro\"]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's 
host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1 stream=0 stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Defaul\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When using the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header will contain the information on which stream write the\nstream (stdout or stderr). It also contain the size of the\nassociated frame encoded on the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes chose stdout or stderr depending on the first byte Extract the frame size from the last 4 byets Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0 stream=1 stdin=1 stdout=1 stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns\n the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated with the container. Default false force \u2013 1/True/true or 0/False/false, Remove the container\n even if it is running. 
Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error 2.2 Images List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Create an image POST /images/create Create an image, either by pull it from the registry or by importing\n i Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image\n name at path Example request : POST /images/test/insert?path=/usr url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/ubuntu/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"]\n \"Image\":\"ubuntu\",\n \"Volumes\":null,\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/ubuntu/history HTTP/1.1 Example 
response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nIf you wish to push an image on to a private registry, that image must already have been tagged\ninto a repository which references that registry host name and port. This repository name should\nthen be used in the URL. This mirrors the flow of the CLI. Example request : POST /images/registry.acme.com:5000/test/push HTTP/1.1 Query Parameters: tag \u2013 the tag to associate with the image on the registry, optional Request Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig object. 
Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Query Parameters: force \u2013 1/True/true or 0/False/false, default false noprune \u2013 1/True/true or 0/False/false, default false Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. 
Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error 2.3 Misc Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile` at its root. It may include any number of other files,\n which will be accessible in the build context (See the ADD build\n command ). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 git or HTTP/HTTPS URI build source q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm - remove intermediate containers after a successful build Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile objec Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\" hannibal\",\n \"password: \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17 m=message repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"NetworkDisabled\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 OK\n 
Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"ubuntu:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\n specified by name . See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. See the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n}",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.10#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.9/",
"tags": "",
"text": "Docker Remote API v1.9\n1. Brief introduction\n\nThe Remote API has replaced rcli\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers.\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the 
containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\n\n\nHostname \u2013 Container host name\nUser \u2013 Username or UID\nMemory \u2013 Memory Limit in bytes\nCpuShares \u2013 CPU shares (relative weight)\nAttachStdin \u2013 1/True/true or 0/False/false, attach to\n standard input. Default false\nAttachStdout \u2013 1/True/true or 0/False/false, attach to\n standard output. Default false\nAttachStderr \u2013 1/True/true or 0/False/false, attach to\n standard error. Default false\nTty \u2013 1/True/true or 0/False/false, allocate a pseudo-tty.\n Default false\nOpenStdin \u2013 1/True/true or 0/False/false, keep stdin open\n even if not attached. Default false\n\nQuery Parameters:\n\n\n\nname \u2013 Assign the specified name to the container. 
Mus\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes 
running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false\n }\n\nExample response:\n 
HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nBinds \u2013 Create a bind mount to a directory or file with\n [host-path]:[container-path]:[rw|ro]. If a directory\n \"container-path\" is missing, then docker creates a new volume.\nLxcConf \u2013 Map of custom lxc options\nPortBindings \u2013 Expose ports from the container, optionally\n publishing them via the HostPort flag\nPublishAllPorts \u2013 1/True/true or 0/False/false, publish all\n exposed ports to the host interfaces. Default false\nPrivileged \u2013 1/True/true or 0/False/false, give extended\n privileges to this container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters\n\nsignal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach 
to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create, the\nstream is the raw data from the process PTY and client's stdin. When\nthe TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information on which stream the payload\nis written (stdout or stderr). 
It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": 
\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=base HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/base/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/base/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n 
{\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}\n {\"error\": \"Invalid...\"}\n ...\n\nRequest Headers:\n\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 Created\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": 
\"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile\nPOST /build\nBuild an image from Dockerfile using a POST body.\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\n\nat its root. It may include any number of other files,\n which will be accessible in the build context (See the ADD build\n command).\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\n\nrm \u2013 Remove intermediate containers after a successful build\nRequest Headers:\n\n\nContent-type \u2013 should be set to \"application/tar\".\n\nX-Registry-Config \u2013 base64-encoded ConfigFile object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info 
HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nJson Parameters:\n\nconfig - the container's configuration\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following 
events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository specified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes 
and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\n\nTry to pull it\n\n\nThen retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\n\nAttach to the container, using logs=1 (to have stdout and\nstderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n\n\nDisplay the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.9#docker-remote-api-v19",
"tags": "",
"text": "",
"title": "Docker Remote API v1.9"
},
{
"loc": "/reference/api/docker_remote_api_v1.9#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli. The daemon listens on unix:///var/run/docker.sock, but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.9#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.9#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers. Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: Hostname \u2013 Container host name User \u2013 Username or UID Memory \u2013 Memory Limit in bytes CpuShares \u2013 CPU shares (relative weight) AttachStdin \u2013 1/True/true or 0/False/false, attach to\n standard input. Default false AttachStdout \u2013 1/True/true or 0/False/false, attach to\n standard output. Default false AttachStderr \u2013 1/True/true or 0/False/false, attach to\n standard error. Default false Tty \u2013 1/True/true or 0/False/false, allocate a pseudo-tty.\n Default false OpenStdin \u2013 1/True/true or 0/False/false, keep stdin open\n even if not attached. Default false Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET 
/containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: Binds \u2013 Create a bind mount to a directory or file 
with\n [host-path]:[container-path]:[rw|ro]. If a directory\n \"container-path\" is missing, then docker creates a new volume. LxcConf \u2013 Map of custom lxc options PortBindings \u2013 Expose ports from the container, optionally\n publishing them via the HostPort flag PublishAllPorts \u2013 1/True/true or 0/False/false, publish all\n exposed ports to the host interfaces. Default false Privileged \u2013 1/True/true or 0/False/false, give extended\n privileges to this container. Default false Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters signal - Signal to send to the container: integer or string like \"SIGINT\".\n When not set, SIGKILL is assumed and the call will wait for the container to exit. 
Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create , the\nstream is the raw data from the process PTY and client's stdin. When\nthe TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header will contain the information on which stream the payload\nis written (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. 
IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. 
Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.9#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=base HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ... 
Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/base/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/base/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? 
(n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nRequest Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name*)\n: Remove the image name` from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
|
|
"title": "2.2 Images"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.9#23-misc",
|
|
"tags": "",
|
|
"text": "Build an image from Dockerfile POST /build Build an image from Dockerfile using a POST body. Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile` at its root. It may include any number of other files,\n which will be accessible in the build context (See the ADD build\n command ). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm \u2013 Remove intermediate containers after a successful build Request Headers: Content-type \u2013 should be set to \"application/tar\" . 
X-Registry-Config \u2013 base64-encoded ConfigFile object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"WorkingDir\":\"\",\n \"DisableNetwork\": false,\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n 
Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Json Parameters: config - the container's configuration Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository specified by name . See the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. See the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. {\"hello-world\":\n {\"latest\": \"565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1\"}\n}",
|
|
"title": "2.3 Misc"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.9#3-going-further",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "3. Going further"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.9#31-inside-docker-run",
|
|
"tags": "",
|
|
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's id",
|
|
"title": "3.1 Inside docker run"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.9#32-hijacking",
|
|
"tags": "",
|
|
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
|
|
"title": "3.2 Hijacking"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.9#33-cors-requests",
|
|
"tags": "",
|
|
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
|
|
"title": "3.3 CORS Requests"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.8/",
|
|
"tags": "",
|
|
"text": "Docker Remote API v1.8\n1. Brief introduction\n\nThe Remote API has replaced rcli\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the 
container sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\n\n\nHostname \u2013 Container host name\nUser \u2013 Username or UID\nMemory \u2013 Memory Limit in bytes\nCpuShares \u2013 CPU shares (relative weight)\nAttachStdin \u2013 1/True/true or 0/False/false, attach to\n standard input. Default false\nAttachStdout \u2013 1/True/true or 0/False/false, attach to\n standard output. Default false\nAttachStderr \u2013 1/True/true or 0/False/false, attach to\n standard error. Default false\nTty \u2013 1/True/true or 0/False/false, allocate a pseudo-tty.\n Default false\nOpenStdin \u2013 1/True/true or 0/False/false, keep stdin open\n even if not attached. Default false\n\nQuery Parameters:\n\n\n\nname \u2013 Assign the specified name to the container. 
Must\n match /?[a-zA-Z0-9_-]+.\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes 
running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false\n }\n\nExample response:\n 
HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nBinds \u2013 Create a bind mount to a directory or file with\n [host-path]:[container-path]:[rw|ro]. If a directory\n \"container-path\" is missing, then docker creates a new volume.\nLxcConf \u2013 Map of custom lxc options\nPortBindings \u2013 Expose ports from the container, optionally\n publishing them via the HostPort flag\nPublishAllPorts \u2013 1/True/true or 0/False/false, publish all\n exposed ports to the host interfaces. Default false\nPrivileged \u2013 1/True/true or 0/False/false, give extended\n privileges to this container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: 
application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information on which stream the\npayload belongs to (stdout or stderr). 
It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": 
\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nCreate an image\nPOST /images/create\nCreate an image, either by pull it from the registry or by importing i\nExample request:\n POST /images/create?fromImage=base HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usrurl=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/base/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/base/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n 
{\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? (n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nRequest Headers:\n\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": 
\"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\n\nnocache \u2013 do not use the cache when building the image\nRequest Headers:\n\n\nContent-type \u2013 should be set to\n \"application/tar\".\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig objec\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\" hannibal\",\n \"password: \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample 
response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\nrun \u2013 config automatically applied when the image is run.\n (ex: {\"Cmd\": [\"cat\", \"/world\"], \"PortSpecs\":[\"22\"]})\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming,\nor via polling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": 
\"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nSee the image tarball format for more details.\nExample request\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nSee the image tarball format for more details.\nExample request\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nImage tarball format\nAn image tarball contains one directory per image layer (named using its long ID),\neach containing three files:\n\nVERSION: currently 1.0 - the file format version\njson: detailed layer information, similar to docker inspect layer_id\nlayer.tar: A tarfile containing the filesystem changes in this layer\n\nThe layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions.\nIf the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs.\n{hello-world:\n {latest: 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1}\n}\n\n\n3. 
Going further\n3.1 Inside docker run\nHere are the steps of docker run:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#docker-remote-api-v18",
"tags": "",
"text": "",
"title": "Docker Remote API v1.8"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli. The daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport stdout, stdin \n and stderr.",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"CpuShares\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: Hostname \u2013 Container host name User \u2013 Username or UID Memory \u2013 Memory Limit in bytes CpuShares \u2013 CPU shares (relative weight) AttachStdin \u2013 1/True/true or 0/False/false, attach to\n standard input. Default false AttachStdout \u2013 1/True/true or 0/False/false, attach to\n standard output. Default false AttachStderr \u2013 1/True/true or 0/False/false, attach to\n standard error. Default false Tty \u2013 1/True/true or 0/False/false, allocate a pseudo-tty.\n Default false OpenStdin \u2013 1/True/true or 0/False/false, keep stdin open\n even if not attached. Default false Query Parameters: name \u2013 Assign the specified name to the container. Must\n match /?[a-zA-Z0-9_-]+ . 
Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {},\n \"HostConfig\": {\n \"Binds\": null,\n \"ContainerIDFile\": \"\",\n \"LxcConf\": [],\n \"Privileged\": false,\n \"PortBindings\": {\n \"80/tcp\": [\n {\n \"HostIp\": \"0.0.0.0\",\n \"HostPort\": \"49153\"\n }\n ]\n },\n \"Links\": null,\n \"PublishAllPorts\": false\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET 
/containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"PublishAllPorts\":false,\n \"Privileged\":false\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: Binds \u2013 Create a bind mount to a directory or file 
with\n [host-path]:[container-path]:[rw|ro]. If a directory\n \"container-path\" is missing, then docker creates a new volume. LxcConf \u2013 Map of custom lxc options PortBindings \u2013 Expose ports from the container, optionally\n publishing them via the HostPort flag PublishAllPorts \u2013 1/True/true or 0/False/false, publish all\n exposed ports to the host interfaces. Default false Privileged \u2013 1/True/true or 0/False/false, give extended\n privileges to this container. Default false Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1 stream=0 stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header contains the information about which stream the data\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=base HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pulling...\"}\n {\"status\": \"Pulling\", \"progress\": \"1 B/ 100 B\", \"progressDetail\": {\"current\": 1, \"total\": 100}}\n {\"error\": \"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\", \"progressDetail\":{\"current\":1}}\n {\"error\":\"Invalid...\"}\n ... 
Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/base/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/base/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"Pushing...\"}\n {\"status\": \"Pushing\", \"progress\": \"1/? 
(n/a)\", \"progressDetail\": {\"current\": 1}}}\n {\"error\": \"Invalid...\"}\n ...\n\nRequest Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"stream\": \"Step 1...\"}\n {\"stream\": \"...\"}\n {\"error\": \"Error...\", \"errorDetail\": {\"code\": 123, \"message\": \"Error...\"}}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/reference/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image Request Headers: Content-type \u2013 should be set to\n \"application/tar\" . 
X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") run \u2013 config automatically applied when the image is run.\n (ex: {\"Cmd\": [\"cat\", \"/world\"], \"PortSpecs\":[\"22\"]}) Status Codes: 201 \u2013 no error 404 \u2013 no such 
container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming,\nor via polling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name .\nSee the image tarball format for more details. Example request GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. See the image tarball format for more details. 
Example request POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error Image tarball format An image tarball contains one directory per image layer (named using its long ID),\neach containing three files: VERSION : currently 1.0 - the file format version json : detailed layer information, similar to docker inspect layer_id layer.tar : A tarfile containing the filesystem changes in this layer The layer.tar file will contain aufs style .wh..wh.aufs files and directories\nfor storing attribute changes and deletions. If the tarball defines a repository, there will also be a repositories file at\nthe root that contains a list of repository and tag names mapped to layer IDs. { hello-world :\n { latest : 565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1 }\n}",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.8#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.7/",
"tags": "",
"text": "Docker Remote API v1.7\n1. Brief introduction\n\nThe Remote API has replaced rcli\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the 
containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": 
\"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export 
HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"Privileged\":false,\n \"PublishAllPorts\":false\n }\n\nBinds need to reference Volumes that were defined during container\ncreation.\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST 
/containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header will contain the information about which stream the frame\nbelongs to (stdout or stderr). 
It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nChoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/json\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": 
\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ]\n\nCreate an image\nPOST /images/create\nCreate an image, either by pull it from the registry or by importing i\nExample request:\n POST /images/create?fromImage=base HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nRequest Headers:\n\nX-Registry-Auth \u2013 base64-encoded AuthConfig object\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usrurl=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/base/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/base/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n 
{\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nRequest Headers:\n\n\nX-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub.\n\nNote:\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request.\n\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ]\n\nQuery Parameters:\n\nterm \u2013 
term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/builder/#dockerbuilder)).\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\n\nnocache \u2013 do not use the cache when building the image\nRequest Headers:\n\n\nContent-type \u2013 should be set to\n \"application/tar\".\n\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 
server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\nrun \u2013 config automatically applied when the image is run.\n (ex: {\"Cmd\": [\"cat\", \"/world\"], \"PortSpecs\":[\"22\"]})\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp 
used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet a tarball containing all images and tags in a repository\nGET /images/(name)/get\nGet a tarball containing all images and metadata for the repository\nspecified by name.\nExample request:\n GET /images/ubuntu/get\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nLoad a tarball with a set of images and tags into docker\nPOST /images/load\nLoad a set of images and tags into the docker repository.\nExample request:\n POST /images/load\n\n Tarball in body\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.7#docker-remote-api-v17",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Docker Remote API v1.7"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.7#1-brief-introduction",
|
|
"tags": "",
|
|
"text": "The Remote API has replaced rcli The daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport stdout, stdin \n and stderr",
|
|
"title": "1. Brief introduction"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.7#2-endpoints",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "2. Endpoints"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.7#21-containers",
|
|
"tags": "",
|
|
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{\n \"/tmp\": {}\n },\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\",\n \"ExposedPorts\":{\n \"22/tcp\": {}\n }\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": 
\"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"PortBindings\":{ \"22/tcp\": [{ \"HostPort\": \"11022\" }] },\n \"Privileged\":false,\n \"PublishAllPorts\":false\n }\n\nBinds need to reference Volumes that were defined during container\ncreation. Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header will contain the information about which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame encoded on the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. 
IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes Choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. 
Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#22-images",
"tags": "",
"text": "List Images GET /images/json Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"RepoTags\": [\n \"ubuntu:12.04\",\n \"ubuntu:precise\",\n \"ubuntu:latest\"\n ],\n \"Id\": \"8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c\",\n \"Created\": 1365714795,\n \"Size\": 131506275,\n \"VirtualSize\": 131506275\n },\n {\n \"RepoTags\": [\n \"ubuntu:12.10\",\n \"ubuntu:quantal\"\n ],\n \"ParentId\": \"27cf784147099545\",\n \"Id\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"Created\": 1364102658,\n \"Size\": 24653,\n \"VirtualSize\": 180116135\n }\n ] Create an image POST /images/create Create an image, either by pull it from the registry or by importing i Example request : POST /images/create?fromImage=base HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Request Headers: X-Registry-Auth \u2013 base64-encoded AuthConfig object Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... 
Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/base/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/base/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nRequest Headers: X-Registry-Auth \u2013 include a base64-encoded AuthConfig\n object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub . Note :\nThe response keys have changed from API v1.6 to reflect the JSON\nsent by the registry server to the docker daemon's request. Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"wma55/u1210sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"jdswinbank/sshd\",\n \"star_count\": 0\n },\n {\n \"description\": \"\",\n \"is_official\": false,\n \"is_trusted\": false,\n \"name\": \"vgauthier/sshd\",\n \"star_count\": 0\n }\n ...\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\n\nThe archive must include a file called `Dockerfile`\nat its root. It may include any number of other files,\nwhich will be accessible in the build context (See the [*ADD build\ncommand*](/builder/#dockerbuilder)). Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image Request Headers: Content-type \u2013 should be set to\n \"application/tar\" . Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\" hannibal\",\n \"password: \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : 
HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1 Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") run \u2013 config automatically applied when the image is run.\n (ex: {\"Cmd\": [\"cat\", \"/world\"], \"PortSpecs\":[\"22\"]}) Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). 
Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error Get a tarball containing all images and tags in a repository GET /images/(name)/get Get a tarball containing all images and metadata for the repository\nspecified by name . Example request : GET /images/ubuntu/get Example response : HTTP/1.1 200 OK\n Content-Type: application/x-tar\n\n Binary data stream Status Codes: 200 \u2013 no error 500 \u2013 server error Load a tarball with a set of images and tags into docker POST /images/load Load a set of images and tags into the docker repository. Example request : POST /images/load\n\n Tarball in body Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.7#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.6/",
"tags": "",
"text": "Docker Remote API v1.6\n1. Brief introduction\n\nThe Remote API has replaced rcli\nThe daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket.\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1before=8dfafdbc3a40size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"base:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\": [],\n \"SizeRw\": 12288,\n \"SizeRootFs\": 0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the 
containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"ExposedPorts\":{},\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nQuery Parameters:\n\n\n\nname \u2013 container name to use\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n\n500 \u2013 server error\nMore Complex Example request, in 2 steps. 
First, use create to\nexpose a Private Port, which can be bound back to a Public Port at\nstartup:\nPOST /containers/create HTTP/1.1\nContent-Type: application/json\n\n{\n \"Cmd\":[\n \"/usr/sbin/sshd\",\"-D\"\n ],\n \"Image\":\"image-with-sshd\",\n \"ExposedPorts\":{\"22/tcp\":{}}\n }\n\n\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\n**Second, start (using the ID returned above) the image we just\ncreated, mapping the ssh port 22 to something on the host**:\n\n POST /containers/e90e34656806/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }]}\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain; charset=utf-8\n Content-Length: 0\n\n**Now you can ssh into your new container on port 11022.**\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"ExposedPorts\": {},\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"base\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n 
\"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server 
error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n \"ContainerIDFile\": \"\",\n \"Privileged\": false,\n \"PortBindings\": {\"22/tcp\": [{\"HostIp\":\"\", \"HostPort\":\"\"}]},\n \"Links\": [],\n \"PublishAllPorts\": false\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\n\n\nsignal \u2013 Signal to send to the container (integer). 
When not\n set, SIGKILL is assumed and the call waits for the\n container to exit.\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n\n500 \u2013 server error\nStream details:\nWhen the TTY setting is enabled in\nPOST /containers/create\n,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, the stream is multiplexed to separate\nstdout and stderr.\nThe format is a Header and a Payload (frame).\nHEADER\nThe header contains the information on which stream the payload is\nwritten (stdout or stderr). 
It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32).\nIt is encoded on the first 8 bytes like this:\nheader := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}\n\nSTREAM_TYPE can be:\n\n\n0: stdin (will be written on stdout)\n\n1: stdout\n\n2: stderr\nSIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian.\nPAYLOAD\nThe payload is the raw stream.\nIMPLEMENTATION\nThe simplest way to implement the Attach protocol is the following:\n\nRead 8 bytes\nchoose stdout or stderr depending on the first byte\nExtract the frame size from the last 4 bytes\nRead the extracted size and output it on the correct output\nGoto 1)\n\n\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"base\",\n \"Tag\":\"ubuntu-12.10\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"base\",\n \"Tag\":\"ubuntu-quantal\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ]\n\nExample request:\n GET /images/viz 
HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" -> \"074be284591f\"\n \"1496068ca813\" -> \"08306dc45919\"\n \"08306dc45919\" -> \"0e7893146ac2\"\n \"b750fe79269d\" -> \"1496068ca813\"\n base -> \"27cf78414709\" [style=invis]\n \"f71189fff3de\" -> \"9a33b36209ed\"\n \"27cf78414709\" -> \"b750fe79269d\"\n \"0e7893146ac2\" -> \"d6434d954665\"\n \"d6434d954665\" -> \"d82cbacda43a\"\n base -> \"e9aa60c60128\" [style=invis]\n \"074be284591f\" -> \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nbase\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\nbase2\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\ntest\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=base HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usrurl=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/base/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"ExposedPorts\":{},\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such 
image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/base/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n{\"status\":\"Pushing...\"} {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n{\"error\":\"Invalid...\"} ...\n\n The `X-Registry-Auth` header can be used to\n include a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch 
images\nGET /images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\".\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET 
/info HTTP/1.1\n\nExample response:\n    HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n        \"Containers\":11,\n        \"Images\":16,\n        \"Debug\":false,\n        \"NFd\": 11,\n        \"NGoroutines\":21,\n        \"MemoryLimit\":true,\n        \"SwapLimit\":false,\n        \"IPv4Forwarding\":true\n    }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n    GET /version HTTP/1.1\n\nExample response:\n    HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n        \"Version\":\"0.2.2\",\n        \"GitCommit\":\"5a2a5cc+CHANGES\",\n        \"GoVersion\":\"go1.0.3\"\n    }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n    POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n    Content-Type: application/json\n\n    {\n        \"Cmd\": [\"cat\", \"/world\"],\n        \"ExposedPorts\":{\"22/tcp\":{}}\n    }\n\nExample response:\n    HTTP/1.1 201 OK\n        Content-Type: application/vnd.docker.raw-stream\n\n    {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n    hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n    GET /events?since=1374067924\n\nExample response:\n    HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n    {\"status\": \"start\", \"id\": 
\"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#docker-remote-api-v16",
"tags": "",
"text": "",
"title": "Docker Remote API v1.6"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#1-brief-introduction",
"tags": "",
"text": "The Remote API has replaced rcli The daemon listens on unix:///var/run/docker.sock but you can bind\n Docker to another host/port or a Unix socket. The API tends to be REST, but for some complex commands, like attach \n or pull , the HTTP connection is hijacked to transport stdout, stdin \n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    [\n         {\n                 \"Id\": \"8dfafdbc3a40\",\n                 \"Image\": \"base:latest\",\n                 \"Command\": \"echo 1\",\n                 \"Created\": 1367854155,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\": [{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n                 \"SizeRw\": 12288,\n                 \"SizeRootFs\": 0\n         },\n         {\n                 \"Id\": \"9cd87474be90\",\n                 \"Image\": \"base:latest\",\n                 \"Command\": \"echo 222222\",\n                 \"Created\": 1367854155,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\": [],\n                 \"SizeRw\": 12288,\n                 \"SizeRootFs\": 0\n         },\n         {\n                 \"Id\": \"3176a2479c92\",\n                 \"Image\": \"base:latest\",\n                 \"Command\": \"echo 3333333333333333\",\n                 \"Created\": 1367854154,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\":[],\n                 \"SizeRw\":12288,\n                 \"SizeRootFs\":0\n         },\n         {\n                 \"Id\": \"4cb07b47f9fb\",\n                 \"Image\": \"base:latest\",\n                 \"Command\": \"echo 444444444444444444444444444444444\",\n                 \"Created\": 1367854152,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\": [],\n                 \"SizeRw\": 12288,\n                 \"SizeRootFs\": 0\n         }\n    ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n    Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"ExposedPorts\":{},\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"base\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Query Parameters: name \u2013 container name to use Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error More Complex Example request, in 2 steps. 
First, use create to\nexpose a Private Port, which can be bound back to a Public Port at\nstartup: POST /containers/create HTTP/1.1\nContent-Type: application/json\n\n{\n     \"Cmd\":[\n             \"/usr/sbin/sshd\",\"-D\"\n     ],\n     \"Image\":\"image-with-sshd\",\n     \"ExposedPorts\":{\"22/tcp\":{}}\n    } Example response : HTTP/1.1 201 OK\n    Content-Type: application/json\n\n    {\n         \"Id\":\"e90e34656806\"\n         \"Warnings\":[]\n    }\n\n**Second, start (using the ID returned above) the image we just\ncreated, mapping the ssh port 22 to something on the host**:\n\n    POST /containers/e90e34656806/start HTTP/1.1\n    Content-Type: application/json\n\n    {\n         \"PortBindings\": { \"22/tcp\": [{ \"HostPort\": \"11022\" }]}\n    } Example response : HTTP/1.1 204 No Content\n    Content-Type: text/plain; charset=utf-8\n    Content-Length: 0\n\n**Now you can ssh into your new container on port 11022.** Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n             \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n             \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n             \"Path\": \"date\",\n             \"Args\": [],\n             \"Config\": {\n                     \"Hostname\": \"4fa6e0f0c678\",\n                     \"User\": \"\",\n                     \"Memory\": 0,\n                     \"MemorySwap\": 0,\n                     \"AttachStdin\": false,\n                     \"AttachStdout\": true,\n                     \"AttachStderr\": true,\n                     \"ExposedPorts\": {},\n                     \"Tty\": false,\n                     \"OpenStdin\": false,\n                     \"StdinOnce\": false,\n                     \"Env\": null,\n                     \"Cmd\": [\n                             \"date\"\n                     ],\n                     \"Dns\": null,\n                     \"Image\": \"base\",\n                     \"Volumes\": {},\n                     \"VolumesFrom\": \"\",\n                     \"WorkingDir\": \"\"\n             },\n             \"State\": {\n                     \"Running\": false,\n                     \"Pid\": 0,\n                     \"ExitCode\": 0,\n                     \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n                     \"Ghost\": false\n             },\n             \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n             \"NetworkSettings\": {\n                     \"IpAddress\": \"\",\n                     \"IpPrefixLen\": 0,\n                     
\"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : 
POST /containers/(id)/start HTTP/1.1\n    Content-Type: application/json\n\n    {\n         \"Binds\":[\"/tmp:/tmp\"],\n         \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}],\n         \"ContainerIDFile\": \"\",\n         \"Privileged\": false,\n         \"PortBindings\": {\"22/tcp\": [{\"HostIp\":\"\", \"HostPort\":\"\"}]},\n         \"Links\": [],\n         \"PublishAllPorts\": false\n    } Example response : HTTP/1.1 204 No Content\n    Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: signal \u2013 Signal to send to the container (integer). When not\n    set, SIGKILL is assumed and the call will wait for the\n    container to exit. 
Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/vnd.docker.raw-stream\n\n    {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n    false stream \u2013 1/True/true or 0/False/false, return stream.\n    Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n    to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n    stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n    stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Stream details : When the TTY setting is enabled in POST /containers/create ,\nthe stream is the raw data from the process PTY and client's stdin.\nWhen the TTY is disabled, then the stream is multiplexed to separate\nstdout and stderr. The format is a Header and a Payload (frame). HEADER The header indicates which stream the frame\nbelongs to (stdout or stderr). It also contains the size of the\nassociated frame, encoded in the last 4 bytes (uint32). It is encoded on the first 8 bytes like this: header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4} STREAM_TYPE can be: 0: stdin (will be written on stdout) 1: stdout 2: stderr SIZE1, SIZE2, SIZE3, SIZE4 are the 4 bytes of\nthe uint32 size encoded as big endian. PAYLOAD The payload is the raw stream. 
IMPLEMENTATION The simplest way to implement the Attach protocol is the following: Read 8 bytes choose stdout or stderr depending on the first byte Extract the frame size from the last 4 bytes Read the extracted size and output it on the correct output Goto 1) Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n    Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n    to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n    stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n    stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n    associated to the container. 
Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    [\n         {\n                 \"Repository\":\"base\",\n                 \"Tag\":\"ubuntu-12.10\",\n                 \"Id\":\"b750fe79269d\",\n                 \"Created\":1364102658,\n                 \"Size\":24653,\n                 \"VirtualSize\":180116135\n         },\n         {\n                 \"Repository\":\"base\",\n                 \"Tag\":\"ubuntu-quantal\",\n                 \"Id\":\"b750fe79269d\",\n                 \"Created\":1364102658,\n                 \"Size\":24653,\n                 \"VirtualSize\":180116135\n         }\n    ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: text/plain\n\n    digraph docker {\n    \"d82cbacda43a\" - \"074be284591f\"\n    \"1496068ca813\" - \"08306dc45919\"\n    \"08306dc45919\" - \"0e7893146ac2\"\n    \"b750fe79269d\" - \"1496068ca813\"\n    base - \"27cf78414709\" [style=invis]\n    \"f71189fff3de\" - \"9a33b36209ed\"\n    \"27cf78414709\" - \"b750fe79269d\"\n    \"0e7893146ac2\" - \"d6434d954665\"\n    \"d6434d954665\" - \"d82cbacda43a\"\n    base - \"e9aa60c60128\" [style=invis]\n    \"074be284591f\" - \"f71189fff3de\"\n    \"b750fe79269d\" [label=\"b750fe79269d\\nbase\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n    \"e9aa60c60128\" [label=\"e9aa60c60128\\nbase2\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n    \"9a33b36209ed\" [label=\"9a33b36209ed\\ntest\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n    base [style=invisible]\n    } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n    Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=base HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\"status\":\"Pulling...\"}\n    {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n    {\"error\":\"Invalid...\"}\n    ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\"status\":\"Inserting...\"}\n    {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n    {\"error\":\"Invalid...\"}\n    ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/base/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n         \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n         \"parent\":\"27cf784147099545\",\n         \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n         \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n         \"container_config\":\n                 {\n                         \"Hostname\":\"\",\n                         \"User\":\"\",\n                         \"Memory\":0,\n                         \"MemorySwap\":0,\n                         \"AttachStdin\":false,\n                         \"AttachStdout\":false,\n                         \"AttachStderr\":false,\n                         \"ExposedPorts\":{},\n                         \"Tty\":true,\n                         \"OpenStdin\":true,\n                         \"StdinOnce\":false,\n                         \"Env\":null,\n                         \"Cmd\": [\"/bin/bash\"],\n                         \"Dns\":null,\n                         \"Image\":\"base\",\n                         \"Volumes\":null,\n                         \"VolumesFrom\":\"\",\n                         \"WorkingDir\":\"\"\n                 },\n         \"Size\": 6824592\n    } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET 
/images/(name)/history Return the history of the image name Example request : GET /images/base/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    [\n         {\n                 \"Id\": \"b750fe79269d\",\n                 \"Created\": 1364102658,\n                 \"CreatedBy\": \"/bin/bash\"\n         },\n         {\n                 \"Id\": \"27cf78414709\",\n                 \"Created\": 1364068391,\n                 \"CreatedBy\": \"\"\n         }\n    ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n{\"status\":\"Pushing...\"} {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n{\"error\":\"Invalid...\"} ... The `X-Registry-Auth` header can be used to include a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image\n    500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-type: application/json\n\n    [\n     {\"Untagged\": \"3e2f21a89f\"},\n     {\"Deleted\": \"3e2f21a89f\"},\n     {\"Deleted\": \"53b4f83ac9\"}\n    ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n    
Content-Type: application/json\n\n    [\n        {\n            \"Name\":\"cespare/sshd\",\n            \"Description\":\"\"\n        },\n        {\n            \"Name\":\"johnfuller/sshd\",\n            \"Description\":\"\"\n        },\n        {\n            \"Name\":\"dhrp/mongodb-sshd\",\n            \"Description\":\"\"\n        }\n    ]\n\n    Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n    {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n\n    {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\". Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n    the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n    Content-Type: application/json\n\n    {\n         \"username\": \"hannibal\",\n         \"password\": \"xxxx\",\n         \"email\": \"hannibal@a-team.com\",\n         \"serveraddress\": \"https://index.docker.io/v1/\"\n    } Example response : HTTP/1.1 200 OK\n    Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n        \"Containers\":11,\n        \"Images\":16,\n        \"Debug\":false,\n        \"NFd\": 11,\n        \"NGoroutines\":21,\n        \"MemoryLimit\":true,\n        \"SwapLimit\":false,\n        \"IPv4Forwarding\":true\n    } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\n        
\"Version\":\"0.2.2\",\n        \"GitCommit\":\"5a2a5cc+CHANGES\",\n        \"GoVersion\":\"go1.0.3\"\n    } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n    Content-Type: application/json\n\n    {\n        \"Cmd\": [\"cat\", \"/world\"],\n        \"ExposedPorts\":{\"22/tcp\":{}}\n    } Example response : HTTP/1.1 201 OK\n        Content-Type: application/vnd.docker.raw-stream\n\n    {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n    hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    {\"status\": \"create\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n    {\"status\": \"start\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067924}\n    {\"status\": \"stop\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067966}\n    {\"status\": \"destroy\", \"id\": \"dfdf82bd3881\",\"from\": \"base:latest\", \"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.6#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.5/",
"tags": "",
"text": "Docker Remote API v1.5\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n    or pull, the HTTP connection is hijacked to transport stdout, stdin,\n    and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n    GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n    HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    [\n         {\n                 \"Id\": \"8dfafdbc3a40\",\n                 \"Image\": \"ubuntu:latest\",\n                 \"Command\": \"echo 1\",\n                 \"Created\": 1367854155,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\":[{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n                 \"SizeRw\":12288,\n                 \"SizeRootFs\":0\n         },\n         {\n                 \"Id\": \"9cd87474be90\",\n                 \"Image\": \"ubuntu:latest\",\n                 \"Command\": \"echo 222222\",\n                 \"Created\": 1367854155,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\":[],\n                 \"SizeRw\":12288,\n                 \"SizeRootFs\":0\n         },\n         {\n                 \"Id\": \"3176a2479c92\",\n                 \"Image\": \"centos:latest\",\n                 \"Command\": \"echo 3333333333333333\",\n                 \"Created\": 1367854154,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\":[],\n                 \"SizeRw\":12288,\n                 \"SizeRootFs\":0\n         },\n         {\n                 \"Id\": \"4cb07b47f9fb\",\n                 \"Image\": \"fedora:latest\",\n                 \"Command\": \"echo 444444444444444444444444444444444\",\n                 \"Created\": 1367854152,\n                 \"Status\": \"Exit 0\",\n                 \"Ports\":[],\n                 \"SizeRw\":12288,\n                 \"SizeRootFs\":0\n         }\n    ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n    Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad 
parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Privileged\": false,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\":\"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": 
\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\":[\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\":[\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\":\"/dev\",\n \"Kind\":0\n },\n {\n \"Path\":\"/dev/kmsg\",\n \"Kind\":1\n },\n {\n \"Path\":\"/test\",\n \"Kind\":1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: 
application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 
1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\":\"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ]\n\nExample request:\n GET /images/viz 
HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all images.\n Only non-intermediate images are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object.\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usrurl=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such 
image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"CreatedBy\":\"/bin/bash\"\n },\n {\n \"Id\":\"27cf78414709\",\n \"Created\":1364068391,\n \"CreatedBy\":\"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nThe `X-Registry-Auth` header can be used to\ninclude a base64-encoded AuthConfig object.\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\":\"3e2f21a89f\"},\n {\"Deleted\":\"3e2f21a89f\"},\n {\"Deleted\":\"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET 
/images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\".\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\nrm \u2013 remove intermediate containers after a successful build\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\",\n \"serveraddress\":\"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide 
information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n 
{\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067966}\n {\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run:\n\nCreate the container\nIf the status code is 404, it means the image doesn't exist:\n Try to pull it - Then retry to create the container\nStart the container\nIf you are not in detached mode:\n Attach to the container, using logs=1 (to have stdout and stderr\n from the container's start) and stream=1\nIf in detached mode or only stdin is attached:\n Display the container's id\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
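The endpoint reference above can be exercised with nothing more than an HTTP GET plus a correctly `&`-joined query string. A minimal client-side sketch (the daemon address is an assumption based on the default port 2375 mentioned in the introduction, and the sample payload is copied from the example response rather than fetched live):

```python
import json
from urllib.parse import urlencode

# Assumed daemon address; the introduction names 2375 as the default port.
DAEMON = "http://localhost:2375"

def list_containers_url(show_all=True, before=None, size=False):
    """Build the GET /containers/json URL; parameters are joined with '&'."""
    params = {"all": 1 if show_all else 0}
    if before:
        params["before"] = before
    if size:
        params["size"] = 1
    return f"{DAEMON}/containers/json?{urlencode(params)}"

url = list_containers_url(before="8dfafdbc3a40", size=True)

# Parse a sample payload shaped like the documented example response
# (hard-coded here, not live output from a daemon).
sample = '[{"Id": "8dfafdbc3a40", "Image": "ubuntu:latest", "Status": "Exit 0"}]'
containers = json.loads(sample)
```

Pointing an HTTP library at `url` against a running daemon would return the documented JSON array; here only the URL construction and response parsing are shown so the sketch stays self-contained.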
{
"loc": "/reference/api/docker_remote_api_v1.5#docker-remote-api-v15",
"tags": "",
"text": "",
"title": "Docker Remote API v1.5"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli Default port in the docker daemon is 2375 The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr",
"title": "1. Brief introduction"
},
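The introduction notes that the daemon listens on port 2375 by default. A tiny sketch of how a client would resolve endpoint paths against that base address (the loopback host is an assumption; only URL handling is shown, no request is sent):

```python
from urllib.parse import urljoin

# Assumed daemon base URL using the default port from the introduction.
DAEMON = "http://127.0.0.1:2375"

def endpoint(path):
    """Resolve an API path such as '/version' against the daemon base URL."""
    return urljoin(DAEMON, path)

version_url = endpoint("/version")
```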
{
"loc": "/reference/api/docker_remote_api_v1.5#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":[{\"PrivatePort\": 2222, \"PublicPort\": 3333, \"Type\": \"tcp\"}],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":[],\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Privileged\": false,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\":\"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": 
\"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\":[\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\":[\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\":\"/dev\",\n \"Kind\":0\n },\n {\n \"Path\":\"/dev/kmsg\",\n \"Kind\":1\n },\n {\n \"Path\":\"/test\",\n \"Kind\":1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\":\"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
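The container lifecycle in section 2.1 above is driven by two JSON bodies: the config sent to `POST /containers/create` and the hostConfig sent to `POST /containers/(id)/start`. A minimal sketch of building those payloads (only a subset of the documented fields is shown; omitted fields fall back to the daemon's defaults):

```python
import json

def create_body(image, cmd, tty=False):
    # Subset of the POST /containers/create payload from section 2.1.
    return {
        "Image": image,
        "Cmd": cmd,
        "Tty": tty,
        "AttachStdout": True,
        "AttachStderr": True,
    }

def start_body(binds=None, lxc_conf=None):
    # hostConfig for POST /containers/(id)/start (e.g. Binds like "/tmp:/tmp").
    return {"Binds": binds or [], "LxcConf": lxc_conf or []}

create = json.dumps(create_body("ubuntu", ["date"]))
start = json.dumps(start_body(binds=["/tmp:/tmp"]))
```

Sending `create` returns an Id (and possibly Warnings); the same Id is then used in the start, wait, and remove calls that follow in the section.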
{
"loc": "/reference/api/docker_remote_api_v1.5#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by defaul Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pull it from the registry or by importing i Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nWhen using this endpoint to pull an image from the registry, the\n`X-Registry-Auth` header can be used to include\na base64-encoded AuthConfig object. Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image 
GET /images/(name)/history Return the history of the image name Example request : GET /images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"CreatedBy\":\"/bin/bash\"\n },\n {\n \"Id\":\"27cf78414709\",\n \"Created\":1364068391,\n \"CreatedBy\":\"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nThe `X-Registry-Auth` header can be used to\ninclude a base64-encoded AuthConfig object. Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo force=0 tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\":\"3e2f21a89f\"},\n {\"Deleted\":\"3e2f21a89f\"},\n {\"Deleted\":\"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\". Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image rm \u2013 remove intermediate containers after a successful build Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\",\n \"serveraddress\":\"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17 m=message repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067966}\n {\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n Try to pull it - Then retry to create the container Start the container If you are not in detached mode:\n Attach to the container, using logs=1 (to have stdout and stderr\n from the container's start) and stream=1 If in detached mode or only stdin is attached:\n Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.5#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.4/",
"tags": "",
"text": "Docker Remote API v1.4\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a 
container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Privileged\": false,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n 
\"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n409 \u2013 conflict between containers and images\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n }\n\nQuery Parameters:\n\nps_args \u2013 ps arguments to use (e.g., aux)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR 
STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, 
return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nCopy files or folders from a container\nPOST /containers/(id)/copy\nCopy files or folders of container id\nExample request:\n POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ]\n\nExample request:\n GET /images/viz 
HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" -> \"074be284591f\"\n \"1496068ca813\" -> \"08306dc45919\"\n \"08306dc45919\" -> \"0e7893146ac2\"\n \"b750fe79269d\" -> \"1496068ca813\"\n base -> \"27cf78414709\" [style=invis]\n \"f71189fff3de\" -> \"9a33b36209ed\"\n \"27cf78414709\" -> \"b750fe79269d\"\n \"0e7893146ac2\" -> \"d6434d954665\"\n \"d6434d954665\" -> \"d82cbacda43a\"\n base -> \"e9aa60c60128\" [style=invis]\n \"074be284591f\" -> \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict between containers and images\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the 
history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n {{ authConfig }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n{\"status\":\"Pushing...\"} {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n{\"error\":\"Invalid...\"} ...\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\".\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\nnocache \u2013 do not use the cache when building the image\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n 
\"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067966}\n 
{\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n$ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#docker-remote-api-v14",
"tags": "",
"text": "",
"title": "Docker Remote API v1.4"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli Default port in the docker daemon is 2375 The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1 before=8dfafdbc3a40 size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Privileged\": false,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\",\n \"WorkingDir\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": 
\"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 409 \u2013 conflict between containers and images 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Titles\": [\n \"USER\",\n \"PID\",\n \"%CPU\",\n \"%MEM\",\n \"VSZ\",\n \"RSS\",\n \"TTY\",\n \"STAT\",\n \"START\",\n \"TIME\",\n \"COMMAND\"\n ],\n \"Processes\": [\n [\"root\",\"20147\",\"0.0\",\"0.1\",\"18060\",\"1864\",\"pts/4\",\"S\",\"10:06\",\"0:00\",\"bash\"],\n [\"root\",\"20271\",\"0.0\",\"0.0\",\"4312\",\"352\",\"pts/4\",\"S+\",\"10:07\",\"0:00\",\"sleep\",\"10\"]\n ]\n } Query Parameters: ps_args \u2013 ps arguments to use (e.g., aux) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 
Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"],\n \"LxcConf\":[{\"Key\":\"lxc.utsname\",\"Value\":\"docker\"}]\n } Example response : HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. 
Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Copy files or folders from a container POST /containers/(id)/copy Copy files or folders of container id Example request : POST /containers/4fa6e0f0c678/copy HTTP/1.1\n Content-Type: application/json\n\n {\n \"Resource\": \"test.txt\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\",\n \"WorkingDir\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict between containers and images 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET 
/images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1\n {{ authConfig }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n{\"status\":\"Pushing...\"} {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n{\"error\":\"Invalid...\"} ... Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n 
\"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\". Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output nocache \u2013 do not use the cache when building the image Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\": \"hannibal\",\n \"password\": \"xxxx\",\n \"email\": \"hannibal@a-team.com\",\n \"serveraddress\": \"https://index.docker.io/v1/\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"IPv4Forwarding\":true\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n 
\"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067966}\n {\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"from\":\"ubuntu:latest\",\"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run: Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.4#33-cors-requests",
"tags": "",
"text": "To enable cross-origin requests to the Remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode. $ docker -d -H=\"192.168.1.9:2375\" --api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.3/",
"tags": "",
"text": "Docker Remote API v1.3\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ]\n\nQuery Parameters:\n\n\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false)\nlimit \u2013 Show limit last created containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include non-running ones.\nbefore \u2013 Show only containers created before Id, include non-running ones.\nsize \u2013 1/True/true or 0/False/false, Show the containers sizes\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a 
container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n 
\"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nList processes running inside a container\nGET /containers/(id)/top\nList processes running inside the container id\nExample request:\n GET /containers/4fa6e0f0c678/top HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"PID\":\"11935\",\n \"Tty\":\"pts/2\",\n \"Time\":\"00:00:00\",\n \"Cmd\":\"sh\"\n },\n {\n \"PID\":\"12140\",\n \"Tty\":\"pts/2\",\n \"Time\":\"00:00:00\",\n \"Cmd\":\"sleep\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"]\n }\n\nExample response:\n HTTP/1.1 204 No Content\n Content-Type: text/plain\n\nJson 
Parameters:\n\n\n\nhostConfig \u2013 the container's host configuration (optional)\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. 
Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ]\n\nExample request:\n GET /images/viz HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST 
/images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 
\u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n {{ authConfig }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on 
Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\n :query term: term to search\n :statuscode 200: no error\n :statuscode 500: server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. I\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\".\n\nQuery Parameters:\n\nt \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success\nremote \u2013 build source URI (git or HTTPS/HTTP)\nq \u2013 suppress verbose build output\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password:\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n 
\"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"EventsListeners\":\"0\",\n \"LXCVersion\":\"0.7.5\",\n \"KernelVersion\":\"3.8.0-19-generic\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17m=messagerepo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nMonitor Docker's events\nGET /events\nGet events from docker, either in real time via streaming, or via\npolling (using since).\nDocker containers will report the following events:\ncreate, destroy, die, export, kill, pause, restart, start, stop, unpause\n\nand Docker images will report:\nuntag, delete\n\nExample request:\n GET /events?since=1374067924\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"time\":1374067966}\n 
{\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"time\":1374067970}\n\nQuery Parameters:\n\nsince \u2013 timestamp used for polling\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n\ndocker -d -H=\"192.168.1.9:2375\" -api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#docker-remote-api-v13",
"tags": "",
"text": "",
"title": "Docker Remote API v1.3"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli Default port in the docker daemon is 2375 The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default (i.e., this defaults to false) limit \u2013 Show limit last created containers, include non-running ones. since \u2013 Show only containers created since Id, include non-running ones. before \u2013 Show only containers created before Id, include non-running ones. 
size \u2013 1/True/true or 0/False/false, Show the containers sizes Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": 
\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error List processes running inside a container GET /containers/(id)/top List processes running inside the container id Example request : GET /containers/4fa6e0f0c678/top HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"PID\":\"11935\",\n \"Tty\":\"pts/2\",\n \"Time\":\"00:00:00\",\n \"Cmd\":\"sh\"\n },\n {\n \"PID\":\"12140\",\n \"Tty\":\"pts/2\",\n \"Time\":\"00:00:00\",\n \"Cmd\":\"sleep\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/(id)/start HTTP/1.1\n Content-Type: application/json\n\n {\n \"Binds\":[\"/tmp:/tmp\"]\n } Example response 
: HTTP/1.1 204 No Content\n Content-Type: text/plain Json Parameters: hostConfig \u2013 the container's host configuration (optional) Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" -> \"074be284591f\"\n \"1496068ca813\" -> \"08306dc45919\"\n \"08306dc45919\" -> \"0e7893146ac2\"\n \"b750fe79269d\" -> \"1496068ca813\"\n base -> \"27cf78414709\" [style=invis]\n \"f71189fff3de\" -> \"9a33b36209ed\"\n \"27cf78414709\" -> \"b750fe79269d\"\n \"0e7893146ac2\" -> \"d6434d954665\"\n \"d6434d954665\" -> \"d82cbacda43a\"\n base -> \"e9aa60c60128\" [style=invis]\n \"074be284591f\" -> \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 {{ authConfig }} Example response :\n\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\n 
:query term: term to search\n :statuscode 200: no error\n :statuscode 500: server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nThe stream must be a tar archive compressed with one of the\nfollowing algorithms: identity (no compression), gzip, bzip2, xz.\nThe archive must include a file called Dockerfile at its root. It\nmay include any number of other files, which will be accessible in\nthe build context (See the ADD build command).\n\nThe Content-type header should be set to \"application/tar\". Query Parameters: t \u2013 repository name (and optionally a tag) to be applied to\n the resulting image in case of success remote \u2013 build source URI (git or HTTPS/HTTP) q \u2013 suppress verbose build output Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false,\n \"EventsListeners\":\"0\",\n \"LXCVersion\":\"0.7.5\",\n \"KernelVersion\":\"3.8.0-19-generic\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n 
\"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Monitor Docker's events GET /events Get events from docker, either in real time via streaming, or via\npolling (using since). Docker containers will report the following events: create, destroy, die, export, kill, pause, restart, start, stop, unpause and Docker images will report: untag, delete Example request : GET /events?since=1374067924 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"create\",\"id\":\"dfdf82bd3881\",\"time\":1374067924}\n {\"status\":\"start\",\"id\":\"dfdf82bd3881\",\"time\":1374067924}\n {\"status\":\"stop\",\"id\":\"dfdf82bd3881\",\"time\":1374067966}\n {\"status\":\"destroy\",\"id\":\"dfdf82bd3881\",\"time\":1374067970} Query Parameters: since \u2013 timestamp used for polling Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's id",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach, uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.3#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote api add the flag\n\"--api-enable-cors\" when running docker in daemon mode. docker -d -H=\"192.168.1.9:2375\" -api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.2/",
"tags": "",
"text": "Docker Remote API v1.2\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n    GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1\n\nExample response:\n    HTTP/1.1 200 OK\n    Content-Type: application/json\n\n    [\n        {\n            \"Id\": \"8dfafdbc3a40\",\n            \"Image\": \"ubuntu:latest\",\n            \"Command\": \"echo 1\",\n            \"Created\": 1367854155,\n            \"Status\": \"Exit 0\",\n            \"Ports\":\"\",\n            \"SizeRw\":12288,\n            \"SizeRootFs\":0\n        },\n        {\n            \"Id\": \"9cd87474be90\",\n            \"Image\": \"ubuntu:latest\",\n            \"Command\": \"echo 222222\",\n            \"Created\": 1367854155,\n            \"Status\": \"Exit 0\",\n            \"Ports\":\"\",\n            \"SizeRw\":12288,\n            \"SizeRootFs\":0\n        },\n        {\n            \"Id\": \"3176a2479c92\",\n            \"Image\": \"centos:latest\",\n            \"Command\": \"echo 3333333333333333\",\n            \"Created\": 1367854154,\n            \"Status\": \"Exit 0\",\n            \"Ports\":\"\",\n            \"SizeRw\":12288,\n            \"SizeRootFs\":0\n        },\n        {\n            \"Id\": \"4cb07b47f9fb\",\n            \"Image\": \"fedora:latest\",\n            \"Command\": \"echo 444444444444444444444444444444444\",\n            \"Created\": 1367854152,\n            \"Status\": \"Exit 0\",\n            \"Ports\":\"\",\n            \"SizeRw\":12288,\n            \"SizeRootFs\":0\n        }\n    ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n    Only running containers are shown by default\nlimit \u2013 Show limit last created\n    containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n    non-running ones.\nbefore \u2013 Show only containers created before Id, include\n    non-running ones.\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n    POST /containers/create HTTP/1.1\n    
Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": 
\"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/e90e34656806/start HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the 
container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ]\n\nExample request:\n GET /images/viz HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n 
\"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by defaul\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pull it from the registry or by importing i\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usrurl=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n },\n \"Size\": 6824592\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\":\"b750fe79269d\",\n \"Tag\":[\"ubuntu:latest\"],\n \"Created\":1364102658,\n \"CreatedBy\":\"/bin/bash\"\n },\n {\n \"Id\":\"27cf78414709\",\n \"Created\":1364068391,\n \"CreatedBy\":\"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\n **Example request**:\n\n POST /images/test/push HTTP/1.1\n {{ authConfig }}\n\n **Example response**:\n\n HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepoforce=0tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ]\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\n :query term: term to search\n :statuscode 200: no error\n :statuscode 500: server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n {{ STREAM }}\n\nQuery Parameters:\n\nt \u2013 repository name to be applied to the resulting image in\n case of 
success\nremote \u2013 resource to fetch, as URI\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n{{ STREAM }} is the raw text output of the build command. It uses the\nHTTP Hijack method in order to stream.\nCheck auth configuration\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Status\": \"Login Succeeded\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n401 \u2013 unauthorized\n403 \u2013 forbidden\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag 
\u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's\n\n\n3.2 Hijacking\nIn this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.\n3.3 CORS Requests\nTo enable cross origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode.\n\ndocker -d -H=\"tcp://192.168.1.9:2375\"\n--api-enable-cors",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#docker-remote-api-v12",
"tags": "",
"text": "",
"title": "Docker Remote API v1.2"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli. The default port in the docker daemon is 2375. The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout, stdin\n and stderr.",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\",\n \"Ports\":\"\",\n \"SizeRw\":12288,\n \"SizeRootFs\":0\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. 
Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": 
{\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/e90e34656806/start HTTP/1.1 Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait 
before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658,\n \"Size\":24653,\n \"VirtualSize\":180116135\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n },\n \"Size\": 6824592\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: 
application/json\n\n [\n {\n \"Id\":\"b750fe79269d\",\n \"Tag\":[\"ubuntu:latest\"],\n \"Created\":1364102658,\n \"CreatedBy\":\"/bin/bash\"\n },\n {\n \"Id\":\"27cf78414709\",\n \"Created\":1364068391,\n \"CreatedBy\":\"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 {{ authConfig }} Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-type: application/json\n\n [\n {\"Untagged\": \"3e2f21a89f\"},\n {\"Deleted\": \"3e2f21a89f\"},\n {\"Deleted\": \"53b4f83ac9\"}\n ] Status Codes: 204 \u2013 no error 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n 
\"Description\":\"\"\n }\n ]\n\n :query term: term to search\n :statuscode 200: no error\n :statuscode 500: server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n {{ STREAM }} Query Parameters: t \u2013 repository name to be applied to the resulting image in\n case of success remote \u2013 resource to fetch, as URI Status Codes: 200 \u2013 no error 500 \u2013 server error {{ STREAM }} is the raw text output of the build command. It uses the\nHTTP Hijack method in order to stream. Check auth configuration POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Status\": \"Login Succeeded\"\n } Status Codes: 200 \u2013 no error 204 \u2013 no error 401 \u2013 unauthorized 403 \u2013 forbidden 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": 
[\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.2#33-cors-requests",
"tags": "",
"text": "To enable cross origin requests to the remote API, add the flag\n\"--api-enable-cors\" when running docker in daemon mode. docker -d -H=\"tcp://192.168.1.9:2375\"\n--api-enable-cors",
"title": "3.3 CORS Requests"
},
{
"loc": "/reference/api/docker_remote_api_v1.1/",
"tags": "",
"text": "Docker Remote API v1.1\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\"\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n 
\"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's 
filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/e90e34656806/start HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n HTTP/1.1 
204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n }\n ]\n\nExample request:\n GET /images/viz HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" -> \"074be284591f\"\n \"1496068ca813\" -> \"08306dc45919\"\n \"08306dc45919\" -> \"0e7893146ac2\"\n \"b750fe79269d\" -> \"1496068ca813\"\n base -> \"27cf78414709\" [style=invis]\n \"f71189fff3de\" -> \"9a33b36209ed\"\n \"27cf78414709\" -> \"b750fe79269d\"\n \"0e7893146ac2\" -> \"d6434d954665\"\n \"d6434d954665\" -> \"d82cbacda43a\"\n base -> \"e9aa60c60128\" [style=invis]\n \"074be284591f\" -> \"f71189fff3de\"\n \"b750fe79269d\" 
[label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? 
(n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", 
\"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ...\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n409 \u2013 conflict\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nQuery Parameters:\n\nt \u2013 tag to be applied to the resulting image in case of\n success\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet default username and email\nGET /auth\nGet the default username and email\nExample request:\n GET /auth HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n 
{\n \"username\":\"hannibal\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration and store it\nPOST /auth\nCheck auth configuration and store it\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such 
container\n500 \u2013 server error\n\n3. Going further\n3.1 Inside docker run\nHere are the steps of docker run :\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container\n\n\nStart the container\n\n\nIf you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\nIf in detached mode or only stdin is attached:\n - Display the container's id\n\n\n3.2 Hijacking\nIn this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#docker-remote-api-v11",
"tags": "",
"text": "",
"title": "Docker Remote API v1.1"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli Default port in the docker daemon is 2375 The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\"\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. 
Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\",\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:00\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": 
{\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/e90e34656806/start HTTP/1.1 Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait before 
killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" -> \"074be284591f\"\n \"1496068ca813\" -> \"08306dc45919\"\n \"08306dc45919\" -> \"0e7893146ac2\"\n \"b750fe79269d\" -> \"1496068ca813\"\n base -> \"27cf78414709\" [style=invis]\n \"f71189fff3de\" -> \"9a33b36209ed\"\n \"27cf78414709\" -> \"b750fe79269d\"\n \"0e7893146ac2\" -> \"d6434d954665\"\n \"d6434d954665\" -> \"d82cbacda43a\"\n base -> \"e9aa60c60128\" [style=invis]\n \"074be284591f\" -> \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pulling...\"}\n {\"status\":\"Pulling\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... 
Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url in the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Inserting...\"}\n {\"status\":\"Inserting\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 
1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"status\":\"Pushing...\"}\n {\"status\":\"Pushing\", \"progress\":\"1/? (n/a)\"}\n {\"error\":\"Invalid...\"}\n ... Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag - The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 409 \u2013 conflict 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
"title": "2.2 Images"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#23-misc",
"tags": "",
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ STREAM }} Example response : HTTP/1.1 200 OK\n\n {{ STREAM }} Query Parameters: t \u2013 tag to be applied to the resulting image in case of\n success Status Codes: 200 \u2013 no error 500 \u2013 server error Get default username and email GET /auth Get the default username and email Example request : GET /auth HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"email\":\"hannibal@a-team.com\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration and store it POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes Example request : POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n 
Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
"title": "2.3 Misc"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#3-going-further",
"tags": "",
"text": "",
"title": "3. Going further"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#31-inside-docker-run",
"tags": "",
"text": "Here are the steps of docker run : Create the container If the status code is 404, it means the image doesn't exist:\n - Try to pull it\n - Then retry to create the container Start the container If you are not in detached mode:\n - Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached:\n - Display the container's",
"title": "3.1 Inside docker run"
},
{
"loc": "/reference/api/docker_remote_api_v1.1#32-hijacking",
"tags": "",
"text": "In this version of the API, /attach uses hijacking to transport stdin,\nstdout and stderr on the same socket. This might change in the future.",
"title": "3.2 Hijacking"
},
{
"loc": "/reference/api/docker_remote_api_v1.0/",
"tags": "",
"text": "Docker Remote API v1.0\n1. Brief introduction\n\nThe Remote API is replacing rcli\nDefault port in the docker daemon is 2375\nThe API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr\n\n2. Endpoints\n2.1 Containers\nList containers\nGET /containers/json\nList containers\nExample request:\n GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\"\n }\n ]\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\nlimit \u2013 Show limit last created\n containers, include non-running ones.\nsince \u2013 Show only containers created since Id, include\n non-running ones.\nbefore \u2013 Show only containers created before Id, include\n non-running ones.\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate a container\nPOST /containers/create\nCreate a container\nExample request:\n POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n 
\"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n }\n\nJson Parameters:\n\nconfig \u2013 the container's configuration\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n406 \u2013 impossible to attach (container not running)\n500 \u2013 server error\n\nInspect a container\nGET /containers/(id)/json\nReturn low-level information on the container id\nExample request:\n GET /containers/4fa6e0f0c678/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": {\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nInspect changes on a container's 
filesystem\nGET /containers/(id)/changes\nInspect changes on container id's filesystem\nExample request:\n GET /containers/4fa6e0f0c678/changes HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nExport a container\nGET /containers/(id)/export\nExport the contents of container id\nExample request:\n GET /containers/4fa6e0f0c678/export HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStart a container\nPOST /containers/(id)/start\nStart the container id\nExample request:\n POST /containers/e90e34656806/start HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nStop a container\nPOST /containers/(id)/stop\nStop the container id\nExample request:\n POST /containers/e90e34656806/stop?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRestart a container\nPOST /containers/(id)/restart\nRestart the container id\nExample request:\n POST /containers/e90e34656806/restart?t=5 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nQuery Parameters:\n\nt \u2013 number of seconds to wait before killing the container\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nKill a container\nPOST /containers/(id)/kill\nKill the container id\nExample request:\n POST /containers/e90e34656806/kill HTTP/1.1\n\nExample response:\n 
HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container\nPOST /containers/(id)/attach\nAttach to the container id\nExample request:\n POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default\n false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nAttach to a container (websocket)\nGET /containers/(id)/attach/ws\nAttach to the container id via websocket\nImplements websocket protocol handshake according to RFC 6455\nExample request\n GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1\n\nExample response\n {{ STREAM }}\n\nQuery Parameters:\n\nlogs \u2013 1/True/true or 0/False/false, return logs. Default false\nstream \u2013 1/True/true or 0/False/false, return stream.\n Default false\nstdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false\nstdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false\nstderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. 
Default false\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\nWait a container\nPOST /containers/(id)/wait\nBlock until container id stops, then returns the exit code\nExample request:\n POST /containers/16253994b7c4/wait HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\nRemove a container\nDELETE /containers/(id)\nRemove the container id from the filesystem\nExample request:\n DELETE /containers/16253994b7c4?v=1 HTTP/1.1\n\nExample response:\n HTTP/1.1 204 OK\n\nQuery Parameters:\n\nv \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false\n\nStatus Codes:\n\n204 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such container\n500 \u2013 server error\n\n2.2 Images\nList Images\nGET /images/(format)\nList images format could be json or viz (json default)\nExample request:\n GET /images/json?all=0 HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n }\n ]\n\nExample request:\n GET /images/viz HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" 
[label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n }\n\nQuery Parameters:\n\nall \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default\n\nStatus Codes:\n\n200 \u2013 no error\n400 \u2013 bad parameter\n500 \u2013 server error\n\nCreate an image\nPOST /images/create\nCreate an image, either by pulling it from the registry or by importing it\nExample request:\n POST /images/create?fromImage=ubuntu HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nQuery Parameters:\n\nfromImage \u2013 name of the image to pull\nfromSrc \u2013 source to import, - means stdin\nrepo \u2013 repository\ntag \u2013 tag\nregistry \u2013 the registry to pull from\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInsert a file in an image\nPOST /images/(name)/insert\nInsert a file from url in the image name at path\nExample request:\n POST /images/test/insert?path=/usr&url=myurl HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ TAR STREAM }}\n\nQuery Parameters:\n\nurl \u2013 The url from where the file is taken\npath \u2013 The path where the file is stored\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nInspect an image\nGET /images/(name)/json\nReturn low-level information on the image name\nExample request:\n GET /images/centos/json HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n 
\"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n }\n }\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nGet the history of an image\nGET /images/(name)/history\nReturn the history of the image name\nExample request:\n GET /images/fedora/history HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ]\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nPush an image on the registry\nPOST /images/(name)/push\nPush the image name on the registry\nExample request:\n POST /images/test/push HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }}\n\nStatus Codes:\n\n200 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nTag an image into a repository\nPOST /images/(name)/tag\nTag the image name into a repository\nExample request:\n POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1\n\nExample response:\n HTTP/1.1 201 OK\n\nQuery Parameters:\n\nrepo \u2013 The repository to tag in\nforce \u2013 1/True/true or 0/False/false, default false\ntag - The new tag name\n\nStatus Codes:\n\n201 \u2013 no error\n400 \u2013 bad parameter\n404 \u2013 no such image\n500 \u2013 server error\n\nRemove an image\nDELETE /images/(name)\nRemove the image name from the 
filesystem\nExample request:\n DELETE /images/test HTTP/1.1\n\nExample response:\n HTTP/1.1 204 No Content\n\nStatus Codes:\n\n204 \u2013 no error\n404 \u2013 no such image\n500 \u2013 server error\n\nSearch images\nGET /images/search\nSearch for an image on Docker Hub\nExample request:\n GET /images/search?term=sshd HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ]\n\nQuery Parameters:\n\nterm \u2013 term to search\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\n2.3 Misc\nBuild an image from Dockerfile via stdin\nPOST /build\nBuild an image from Dockerfile via stdin\nExample request:\n POST /build HTTP/1.1\n\n {{ TAR STREAM }}\n\nExample response:\n HTTP/1.1 200 OK\n\n {{ STREAM }}\n\nQuery Parameters:\n\nt \u2013 repository name to be applied to the resulting image in\n case of success\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nGet default username and email\nGET /auth\nGet the default username and email\nExample request:\n GET /auth HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCheck auth configuration and store it\nPOST /auth\nGet the default username and email\nExample request:\n POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password\":\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: text/plain\n\nStatus Codes:\n\n200 \u2013 no error\n204 \u2013 no error\n500 \u2013 server error\n\nDisplay system-wide information\nGET /info\nDisplay system-wide information\nExample request:\n GET /info HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n 
Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nShow the docker version information\nGET /version\nShow the docker version information\nExample request:\n GET /version HTTP/1.1\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n }\n\nStatus Codes:\n\n200 \u2013 no error\n500 \u2013 server error\n\nCreate a new image from a container's changes\nPOST /commit\nCreate a new image from a container's changes\nExample request:\n POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n }\n\nExample response:\n HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"}\n\nQuery Parameters:\n\ncontainer \u2013 source container\nrepo \u2013 repository\ntag \u2013 tag\nm \u2013 commit message\nauthor \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com\")\n\nStatus Codes:\n\n201 \u2013 no error\n404 \u2013 no such container\n500 \u2013 server error\n\n3. 
Going further\n3.1 Inside docker run\nAs an example, the docker run command line makes the following API calls:\n\n\nCreate the container\n\n\nIf the status code is 404, it means the image doesn't exist:\n\nTry to pull it\nThen retry to create the container\n\n\n\nStart the container\n\n\nIf you are not in detached mode:\n\nAttach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1\n\n\n\nIf in detached mode or only stdin is attached:\n\nDisplay the container's\n\n\n\n3.2 Hijacking\nIn this first version of the API, some of the endpoints, like /attach,\n/pull or /push uses hijacking to transport stdin, stdout and stderr on\nthe same socket. This might change in the future.",
"title": "**HIDDEN**"
},
{
"loc": "/reference/api/docker_remote_api_v1.0#docker-remote-api-v10",
"tags": "",
"text": "",
"title": "Docker Remote API v1.0"
},
{
"loc": "/reference/api/docker_remote_api_v1.0#1-brief-introduction",
"tags": "",
"text": "The Remote API is replacing rcli Default port in the docker daemon is 2375 The API tends to be REST, but for some complex commands, like attach\n or pull, the HTTP connection is hijacked to transport stdout stdin\n and stderr",
"title": "1. Brief introduction"
},
{
"loc": "/reference/api/docker_remote_api_v1.0#2-endpoints",
"tags": "",
"text": "",
"title": "2. Endpoints"
},
{
"loc": "/reference/api/docker_remote_api_v1.0#21-containers",
"tags": "",
"text": "List containers GET /containers/json List containers Example request : GET /containers/json?all=1&before=8dfafdbc3a40 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"8dfafdbc3a40\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 1\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"9cd87474be90\",\n \"Image\": \"ubuntu:latest\",\n \"Command\": \"echo 222222\",\n \"Created\": 1367854155,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"3176a2479c92\",\n \"Image\": \"centos:latest\",\n \"Command\": \"echo 3333333333333333\",\n \"Created\": 1367854154,\n \"Status\": \"Exit 0\"\n },\n {\n \"Id\": \"4cb07b47f9fb\",\n \"Image\": \"fedora:latest\",\n \"Command\": \"echo 444444444444444444444444444444444\",\n \"Created\": 1367854152,\n \"Status\": \"Exit 0\"\n }\n ] Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default limit \u2013 Show limit last created\n containers, include non-running ones. since \u2013 Show only containers created since Id, include\n non-running ones. before \u2013 Show only containers created before Id, include\n non-running ones. 
Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create a container POST /containers/create Create a container Example request : POST /containers/create HTTP/1.1\n Content-Type: application/json\n\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":true,\n \"AttachStderr\":true,\n \"PortSpecs\":null,\n \"Tty\":false,\n \"OpenStdin\":false,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\":[\n \"date\"\n ],\n \"Dns\":null,\n \"Image\":\"ubuntu\",\n \"Volumes\":{},\n \"VolumesFrom\":\"\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"Id\":\"e90e34656806\"\n \"Warnings\":[]\n } Json Parameters: config \u2013 the container's configuration Status Codes: 201 \u2013 no error 404 \u2013 no such container 406 \u2013 impossible to attach (container not running) 500 \u2013 server error Inspect a container GET /containers/(id)/json Return low-level information on the container id Example request : GET /containers/4fa6e0f0c678/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Id\": \"4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2\",\n \"Created\": \"2013-05-07T14:51:42.041847+02:00\",\n \"Path\": \"date\",\n \"Args\": [],\n \"Config\": {\n \"Hostname\": \"4fa6e0f0c678\",\n \"User\": \"\",\n \"Memory\": 0,\n \"MemorySwap\": 0,\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"PortSpecs\": null,\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": null,\n \"Cmd\": [\n \"date\"\n ],\n \"Dns\": null,\n \"Image\": \"ubuntu\",\n \"Volumes\": {},\n \"VolumesFrom\": \"\"\n },\n \"State\": {\n \"Running\": false,\n \"Pid\": 0,\n \"ExitCode\": 0,\n \"StartedAt\": \"2013-05-07T14:51:42.087658+02:01360\",\n \"Ghost\": false\n },\n \"Image\": \"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"NetworkSettings\": 
{\n \"IpAddress\": \"\",\n \"IpPrefixLen\": 0,\n \"Gateway\": \"\",\n \"Bridge\": \"\",\n \"PortMapping\": null\n },\n \"SysInitPath\": \"/home/kitty/go/src/github.com/docker/docker/bin/docker\",\n \"ResolvConfPath\": \"/etc/resolv.conf\",\n \"Volumes\": {}\n } Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Inspect changes on a container's filesystem GET /containers/(id)/changes Inspect changes on container id 's filesystem Example request : GET /containers/4fa6e0f0c678/changes HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Path\": \"/dev\",\n \"Kind\": 0\n },\n {\n \"Path\": \"/dev/kmsg\",\n \"Kind\": 1\n },\n {\n \"Path\": \"/test\",\n \"Kind\": 1\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Export a container GET /containers/(id)/export Export the contents of container id Example request : GET /containers/4fa6e0f0c678/export HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/octet-stream\n\n {{ TAR STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Start a container POST /containers/(id)/start Start the container id Example request : POST /containers/e90e34656806/start HTTP/1.1 Example response : HTTP/1.1 200 OK Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Stop a container POST /containers/(id)/stop Stop the container id Example request : POST /containers/e90e34656806/stop?t=5 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: t \u2013 number of seconds to wait before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Restart a container POST /containers/(id)/restart Restart the container id Example request : POST /containers/e90e34656806/restart?t=5 HTTP/1.1 Example response : HTTP/1.1 204 No Content Query Parameters: t \u2013 number of seconds to wait 
before killing the container Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Kill a container POST /containers/(id)/kill Kill the container id Example request : POST /containers/e90e34656806/kill HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Attach to a container POST /containers/(id)/attach Attach to the container id Example request : POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default\n false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Attach to a container (websocket) GET /containers/(id)/attach/ws Attach to the container id via websocket Implements websocket protocol handshake according to RFC 6455 Example request GET /containers/e90e34656806/attach/ws?logs=0&stream=1&stdin=1&stdout=1&stderr=1 HTTP/1.1 Example response {{ STREAM }} Query Parameters: logs \u2013 1/True/true or 0/False/false, return logs. Default false stream \u2013 1/True/true or 0/False/false, return stream.\n Default false stdin \u2013 1/True/true or 0/False/false, if stream=true, attach\n to stdin. Default false stdout \u2013 1/True/true or 0/False/false, if logs=true, return\n stdout log, if stream=true, attach to stdout. 
Default false stderr \u2013 1/True/true or 0/False/false, if logs=true, return\n stderr log, if stream=true, attach to stderr. Default false Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error Wait a container POST /containers/(id)/wait Block until container id stops, then returns the exit code Example request : POST /containers/16253994b7c4/wait HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\"StatusCode\": 0} Status Codes: 200 \u2013 no error 404 \u2013 no such container 500 \u2013 server error Remove a container DELETE /containers/(id) Remove the container id from the filesystem Example request : DELETE /containers/16253994b7c4?v=1 HTTP/1.1 Example response : HTTP/1.1 204 OK Query Parameters: v \u2013 1/True/true or 0/False/false, Remove the volumes\n associated to the container. Default false Status Codes: 204 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such container 500 \u2013 server error",
"title": "2.1 Containers"
},
{
"loc": "/reference/api/docker_remote_api_v1.0#22-images",
"tags": "",
"text": "List Images GET /images/(format) List images format could be json or viz (json default) Example request : GET /images/json?all=0 HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"precise\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n },\n {\n \"Repository\":\"ubuntu\",\n \"Tag\":\"12.04\",\n \"Id\":\"b750fe79269d\",\n \"Created\":1364102658\n }\n ] Example request : GET /images/viz HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: text/plain\n\n digraph docker {\n \"d82cbacda43a\" - \"074be284591f\"\n \"1496068ca813\" - \"08306dc45919\"\n \"08306dc45919\" - \"0e7893146ac2\"\n \"b750fe79269d\" - \"1496068ca813\"\n base - \"27cf78414709\" [style=invis]\n \"f71189fff3de\" - \"9a33b36209ed\"\n \"27cf78414709\" - \"b750fe79269d\"\n \"0e7893146ac2\" - \"d6434d954665\"\n \"d6434d954665\" - \"d82cbacda43a\"\n base - \"e9aa60c60128\" [style=invis]\n \"074be284591f\" - \"f71189fff3de\"\n \"b750fe79269d\" [label=\"b750fe79269d\\nubuntu\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"e9aa60c60128\" [label=\"e9aa60c60128\\ncentos\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n \"9a33b36209ed\" [label=\"9a33b36209ed\\nfedora\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n base [style=invisible]\n } Query Parameters: all \u2013 1/True/true or 0/False/false, Show all containers.\n Only running containers are shown by default Status Codes: 200 \u2013 no error 400 \u2013 bad parameter 500 \u2013 server error Create an image POST /images/create Create an image, either by pulling it from the registry or by importing it Example request : POST /images/create?fromImage=ubuntu HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Query Parameters: fromImage \u2013 name of the image to pull fromSrc \u2013 source to import, - means stdin repo \u2013 repository tag \u2013 
tag registry \u2013 the registry to pull from Status Codes: 200 \u2013 no error 500 \u2013 server error Insert a file in an image POST /images/(name)/insert Insert a file from url into the image name at path Example request : POST /images/test/insert?path=/usr&url=myurl HTTP/1.1 Example response : HTTP/1.1 200 OK\n\n {{ TAR STREAM }} Query Parameters: url \u2013 The url from where the file is taken path \u2013 The path where the file is stored Status Codes: 200 \u2013 no error 500 \u2013 server error Inspect an image GET /images/(name)/json Return low-level information on the image name Example request : GET /images/centos/json HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\":\"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc\",\n \"parent\":\"27cf784147099545\",\n \"created\":\"2013-03-23T22:24:18.818426-07:00\",\n \"container\":\"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0\",\n \"container_config\":\n {\n \"Hostname\":\"\",\n \"User\":\"\",\n \"Memory\":0,\n \"MemorySwap\":0,\n \"AttachStdin\":false,\n \"AttachStdout\":false,\n \"AttachStderr\":false,\n \"PortSpecs\":null,\n \"Tty\":true,\n \"OpenStdin\":true,\n \"StdinOnce\":false,\n \"Env\":null,\n \"Cmd\": [\"/bin/bash\"],\n \"Dns\":null,\n \"Image\":\"centos\",\n \"Volumes\":null,\n \"VolumesFrom\":\"\"\n }\n } Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Get the history of an image GET /images/(name)/history Return the history of the image name Example request : GET /images/fedora/history HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Id\": \"b750fe79269d\",\n \"Created\": 1364102658,\n \"CreatedBy\": \"/bin/bash\"\n },\n {\n \"Id\": \"27cf78414709\",\n \"Created\": 1364068391,\n \"CreatedBy\": \"\"\n }\n ] Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Push an image on the registry POST /images/(name)/push Push the 
image name on the registry Example request : POST /images/test/push HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {{ STREAM }} Status Codes: 200 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Tag an image into a repository POST /images/(name)/tag Tag the image name into a repository Example request : POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1 Example response : HTTP/1.1 201 OK Query Parameters: repo \u2013 The repository to tag in force \u2013 1/True/true or 0/False/false, default false tag \u2013 The new tag name Status Codes: 201 \u2013 no error 400 \u2013 bad parameter 404 \u2013 no such image 500 \u2013 server error Remove an image DELETE /images/(name) Remove the image name from the filesystem Example request : DELETE /images/test HTTP/1.1 Example response : HTTP/1.1 204 No Content Status Codes: 204 \u2013 no error 404 \u2013 no such image 500 \u2013 server error Search images GET /images/search Search for an image on Docker Hub Example request : GET /images/search?term=sshd HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"Name\":\"cespare/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"johnfuller/sshd\",\n \"Description\":\"\"\n },\n {\n \"Name\":\"dhrp/mongodb-sshd\",\n \"Description\":\"\"\n }\n ] Query Parameters: term \u2013 term to search Status Codes: 200 \u2013 no error 500 \u2013 server error",
|
|
"title": "2.2 Images"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.0#23-misc",
|
|
"tags": "",
|
|
"text": "Build an image from Dockerfile via stdin POST /build Build an image from Dockerfile via stdin Example request : POST /build HTTP/1.1\n\n {{ TAR STREAM }} Example response : HTTP/1.1 200 OK\n\n {{ STREAM }} Query Parameters: t \u2013 repository name to be applied to the resulting image in\n case of success Status Codes: 200 \u2013 no error 500 \u2013 server error Get default username and email GET /auth Get the default username and email Example request : GET /auth HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"email\":\"hannibal@a-team.com\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Check auth configuration and store i POST /auth Get the default username and email Example request : POST /auth HTTP/1.1\n Content-Type: application/json\n\n {\n \"username\":\"hannibal\",\n \"password:\"xxxx\",\n \"email\":\"hannibal@a-team.com\"\n } Example response : HTTP/1.1 200 OK\n Content-Type: text/plain Status Codes: 200 \u2013 no error 204 \u2013 no error 500 \u2013 server error Display system-wide information GET /info Display system-wide information Example request : GET /info HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Containers\":11,\n \"Images\":16,\n \"Debug\":false,\n \"NFd\": 11,\n \"NGoroutines\":21,\n \"MemoryLimit\":true,\n \"SwapLimit\":false\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Show the docker version information GET /version Show the docker version information Example request : GET /version HTTP/1.1 Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"Version\":\"0.2.2\",\n \"GitCommit\":\"5a2a5cc+CHANGES\",\n \"GoVersion\":\"go1.0.3\"\n } Status Codes: 200 \u2013 no error 500 \u2013 server error Create a new image from a container's changes POST /commit Create a new image from a container's changes\n \n Example request : POST /commit?container=44c004db4b17 m=message 
&repo=myrepo HTTP/1.1\n Content-Type: application/json\n\n {\n \"Cmd\": [\"cat\", \"/world\"],\n \"PortSpecs\":[\"22\"]\n } Example response : HTTP/1.1 201 OK\n Content-Type: application/vnd.docker.raw-stream\n\n {\"Id\": \"596069db4bf5\"} Query Parameters: container \u2013 source container repo \u2013 repository tag \u2013 tag m \u2013 commit message author \u2013 author (e.g., \"John Hannibal Smith\n hannibal@a-team.com \") Status Codes: 201 \u2013 no error 404 \u2013 no such container 500 \u2013 server error",
|
|
"title": "2.3 Misc"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.0#3-going-further",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "3. Going further"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.0#31-inside-docker-run",
|
|
"tags": "",
|
|
"text": "As an example, the docker run command line makes the following API calls: Create the container If the status code is 404, it means the image doesn't exist: Try to pull it Then retry to create the container Start the container If you are not in detached mode: Attach to the container, using logs=1 (to have stdout and\n stderr from the container's start) and stream=1 If in detached mode or only stdin is attached: Display the container's",
|
|
"title": "3.1 Inside docker run"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_remote_api_v1.0#32-hijacking",
|
|
"tags": "",
|
|
"text": "In this first version of the API, some of the endpoints, like /attach,\n/pull or /push uses hijacking to transport stdin, stdout and stderr on\nthe same socket. This might change in the future.",
|
|
"title": "3.2 Hijacking"
|
|
},
|
|
{
|
|
"loc": "/reference/api/remote_api_client_libraries/",
|
|
"tags": "",
|
|
"text": "Docker Remote API Client Libraries\nThese libraries have not been tested by the Docker maintainers for\ncompatibility. Please file issues with the library owners. If you find\nmore library implementations, please list them in Docker doc bugs and we\nwill add the libraries here.\n\n \n \n \n \n \n \n \n Language/Framework\n Name\n Repository\n Status\n \n \n \n \n C#\n Docker.DotNet\n https://github.com/ahmetalpbalkan/Docker.DotNet\n Active\n \n \n C++\n lasote/docker_client\n http://www.biicode.com/lasote/docker_client (Biicode C++ dependency manager)\n Active\n \n \n Erlang\n erldocker\n https://github.com/proger/erldocker\n Active\n \n \n Go\n go-dockerclient\n https://github.com/fsouza/go-dockerclient\n Active\n \n \n Go\n dockerclient\n https://github.com/samalba/dockerclient\n Active\n \n \n Groovy\n docker-client\n https://github.com/gesellix-docker/docker-client\n Active\n \n \n Java\n docker-java\n https://github.com/docker-java/docker-java\n Active\n \n \n Java\n docker-client\n https://github.com/spotify/docker-client\n Active\n \n \n Java\n jclouds-docker\n https://github.com/jclouds/jclouds-labs/tree/master/docker\n Active\n \n \n JavaScript (NodeJS)\n dockerode\n https://github.com/apocas/dockerode\n Install via NPM: npm install dockerode\n Active\n \n \n JavaScript (NodeJS)\n docker.io\n https://github.com/appersonlabs/docker.io\n Install via NPM: npm install docker.io\n Active\n \n \n JavaScript\n docker-js\n https://github.com/dgoujard/docker-js\n Outdated\n \n \n JavaScript (Angular) WebUI\n docker-cp\n https://github.com/13W/docker-cp\n Active\n \n \n JavaScript (Angular) WebUI\n dockerui\n https://github.com/crosbymichael/dockerui\n Active\n \n \n Perl\n Net::Docker\n https://metacpan.org/pod/Net::Docker\n Active\n \n \n Perl\n Eixo::Docker\n https://github.com/alambike/eixo-docker\n Active\n \n \n PHP\n Alvine\n http://pear.alvine.io/ (alpha)\n Active\n \n \n PHP\n Docker-PHP\n http://stage1.github.io/docker-php/\n Active\n \n \n 
Python\n docker-py\n https://github.com/docker/docker-py\n Active\n \n \n Ruby\n docker-api\n https://github.com/swipely/docker-api\n Active\n \n \n Ruby\n docker-client\n https://github.com/geku/docker-client\n Outdated\n \n \n Rust\n docker-rust\n https://github.com/abh1nav/docker-rust\n Active\n \n \n Scala\n tugboat\n https://github.com/softprops/tugboat\n Active\n \n \n Scala\n reactive-docker\n https://github.com/almoehi/reactive-docker\n Active",
|
|
"title": "Docker Remote API Client Libraries"
|
|
},
|
|
{
|
|
"loc": "/reference/api/remote_api_client_libraries#docker-remote-api-client-libraries",
|
|
"tags": "",
|
|
"text": "These libraries have not been tested by the Docker maintainers for\ncompatibility. Please file issues with the library owners. If you find\nmore library implementations, please list them in Docker doc bugs and we\nwill add the libraries here. \n \n \n \n \n \n \n \n Language/Framework \n Name \n Repository \n Status \n \n \n \n \n C# \n Docker.DotNet \n https://github.com/ahmetalpbalkan/Docker.DotNet \n Active \n \n \n C++ \n lasote/docker_client \n http://www.biicode.com/lasote/docker_client (Biicode C++ dependency manager) \n Active \n \n \n Erlang \n erldocker \n https://github.com/proger/erldocker \n Active \n \n \n Go \n go-dockerclient \n https://github.com/fsouza/go-dockerclient \n Active \n \n \n Go \n dockerclient \n https://github.com/samalba/dockerclient \n Active \n \n \n Groovy \n docker-client \n https://github.com/gesellix-docker/docker-client \n Active \n \n \n Java \n docker-java \n https://github.com/docker-java/docker-java \n Active \n \n \n Java \n docker-client \n https://github.com/spotify/docker-client \n Active \n \n \n Java \n jclouds-docker \n https://github.com/jclouds/jclouds-labs/tree/master/docker \n Active \n \n \n JavaScript (NodeJS) \n dockerode \n https://github.com/apocas/dockerode \n Install via NPM: npm install dockerode \n Active \n \n \n JavaScript (NodeJS) \n docker.io \n https://github.com/appersonlabs/docker.io \n Install via NPM: npm install docker.io \n Active \n \n \n JavaScript \n docker-js \n https://github.com/dgoujard/docker-js \n Outdated \n \n \n JavaScript (Angular) WebUI \n docker-cp \n https://github.com/13W/docker-cp \n Active \n \n \n JavaScript (Angular) WebUI \n dockerui \n https://github.com/crosbymichael/dockerui \n Active \n \n \n Perl \n Net::Docker \n https://metacpan.org/pod/Net::Docker \n Active \n \n \n Perl \n Eixo::Docker \n https://github.com/alambike/eixo-docker \n Active \n \n \n PHP \n Alvine \n http://pear.alvine.io/ (alpha) \n Active \n \n \n PHP \n Docker-PHP \n 
http://stage1.github.io/docker-php/ \n Active \n \n \n Python \n docker-py \n https://github.com/docker/docker-py \n Active \n \n \n Ruby \n docker-api \n https://github.com/swipely/docker-api \n Active \n \n \n Ruby \n docker-client \n https://github.com/geku/docker-client \n Outdated \n \n \n Rust \n docker-rust \n https://github.com/abh1nav/docker-rust \n Active \n \n \n Scala \n tugboat \n https://github.com/softprops/tugboat \n Active \n \n \n Scala \n reactive-docker \n https://github.com/almoehi/reactive-docker \n Active",
|
|
"title": "Docker Remote API Client Libraries"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api/",
|
|
"tags": "",
|
|
"text": "docker.io Accounts API\nGet a single user\nGET /api/v1.1/users/:username/\nGet profile info for the specified user.\nParameters:\n\nusername \u2013 username of the user whose profile info is being\n requested.\n\nRequest Headers:\n\nAuthorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token.\n\nStatus Codes:\n\n200 \u2013 success, user data returned.\n401 \u2013 authentication error.\n403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n profile_read scope.\n404 \u2013 the specified username does not exist.\n\nExample request:\n GET /api/v1.1/users/janedoe/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\": 2,\n \"username\": \"janedoe\",\n \"url\": \"https://www.docker.io/api/v1.1/users/janedoe/\",\n \"date_joined\": \"2014-02-12T17:58:01.431312Z\",\n \"type\": \"User\",\n \"full_name\": \"Jane Doe\",\n \"location\": \"San Francisco, CA\",\n \"company\": \"Success, Inc.\",\n \"profile_url\": \"https://docker.io/\",\n \"gravatar_url\": \"https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80r=gd=mm\"\n \"email\": \"jane.doe@example.com\",\n \"is_active\": true\n }\n\nUpdate a single user\nPATCH /api/v1.1/users/:username/\nUpdate profile info for the specified user.\nParameters:\n\nusername \u2013 username of the user whose profile info is being\n updated.\n\nJson Parameters:\n\nfull_name (string) \u2013 (optional) the new name of the user.\nlocation (string) \u2013 (optional) the new location.\ncompany (string) \u2013 (optional) the new company of the user.\nprofile_url (string) \u2013 (optional) the new profile url.\ngravatar_email (string) \u2013 (optional) the new Gravatar\n email address.\n\nRequest Headers:\n\nAuthorization \u2013 required authentication credentials of\n 
either type HTTP Basic or OAuth Bearer Token.\nContent-Type \u2013 MIME Type of post data. JSON, url-encoded\n form data, etc.\n\nStatus Codes:\n\n200 \u2013 success, user data updated.\n400 \u2013 post data validation error.\n401 \u2013 authentication error.\n403 \u2013 permission error, authenticated user must be the user\n whose data is being updated, OAuth access tokens must have\n profile_write scope.\n404 \u2013 the specified username does not exist.\n\nExample request:\n PATCH /api/v1.1/users/janedoe/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\n\n {\n \"location\": \"Private Island\",\n \"profile_url\": \"http://janedoe.com/\",\n \"company\": \"Retired\"\n }\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\": 2,\n \"username\": \"janedoe\",\n \"url\": \"https://www.docker.io/api/v1.1/users/janedoe/\",\n \"date_joined\": \"2014-02-12T17:58:01.431312Z\",\n \"type\": \"User\",\n \"full_name\": \"Jane Doe\",\n \"location\": \"Private Island\",\n \"company\": \"Retired\",\n \"profile_url\": \"http://janedoe.com/\",\n \"gravatar_url\": \"https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80&r=g&d=mm\",\n \"email\": \"jane.doe@example.com\",\n \"is_active\": true\n }\n\nList email addresses for a user\nGET /api/v1.1/users/:username/emails/\nList email info for the specified user.\nParameters:\n\nusername \u2013 username of the user whose email info is being\n requested.\n\nRequest Headers:\n\nAuthorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token\n\nStatus Codes:\n\n200 \u2013 success, email list returned.\n401 \u2013 authentication error.\n403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_read scope.\n404 \u2013 the specified username does not exist.\n\nExample request:\n GET /api/v1.1/users/janedoe/emails/ 
HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM\n\nExample response:\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"email\": \"jane.doe@example.com\",\n \"verified\": true,\n \"primary\": true\n }\n ]\n\nAdd email address for a user\nPOST /api/v1.1/users/:username/emails/\nAdd a new email address to the specified user's account. The email\naddress must be verified separately, a confirmation email is not\nautomatically sent.\nJson Parameters:\n\nemail (string) \u2013 email address to be added.\n\nRequest Headers:\n\nAuthorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token.\nContent-Type \u2013 MIME Type of post data. JSON, url-encoded\n form data, etc.\n\nStatus Codes:\n\n201 \u2013 success, new email added.\n400 \u2013 data validation error.\n401 \u2013 authentication error.\n403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_write scope.\n404 \u2013 the specified username does not exist.\n\nExample request:\n POST /api/v1.1/users/janedoe/emails/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM\n\n {\n \"email\": \"jane.doe+other@example.com\"\n }\n\nExample response:\n HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"email\": \"jane.doe+other@example.com\",\n \"verified\": false,\n \"primary\": false\n }\n\nDelete email address for a user\nDELETE /api/v1.1/users/:username/emails/\nDelete an email address from the specified user's account. You\ncannot delete a user's primary email address.\nJson Parameters:\n\nemail (string) \u2013 email address to be deleted.\n\nRequest Headers:\n\nAuthorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token.\nContent-Type \u2013 MIME Type of post data. 
JSON, url-encoded\n form data, etc.\n\nStatus Codes:\n\n204 \u2013 success, email address removed.\n400 \u2013 validation error.\n401 \u2013 authentication error.\n403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_write scope.\n404 \u2013 the specified username or email address does not\n exist.\n\nExample request:\n DELETE /api/v1.1/users/janedoe/emails/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM\n\n {\n \"email\": \"jane.doe+other@example.com\"\n }\n\nExample response:\n HTTP/1.1 204 NO CONTENT\n Content-Length: 0",
|
|
"title": "Docker Hub Accounts API"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#dockerio-accounts-api",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "docker.io Accounts API"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#get-a-single-user",
|
|
"tags": "",
|
|
"text": "GET /api/v1.1/users/:username/ Get profile info for the specified user. Parameters: username \u2013 username of the user whose profile info is being\n requested. Request Headers: Authorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token. Status Codes: 200 \u2013 success, user data returned. 401 \u2013 authentication error. 403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n profile_read scope. 404 \u2013 the specified username does not exist. Example request : GET /api/v1.1/users/janedoe/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\": 2,\n \"username\": \"janedoe\",\n \"url\": \"https://www.docker.io/api/v1.1/users/janedoe/\",\n \"date_joined\": \"2014-02-12T17:58:01.431312Z\",\n \"type\": \"User\",\n \"full_name\": \"Jane Doe\",\n \"location\": \"San Francisco, CA\",\n \"company\": \"Success, Inc.\",\n \"profile_url\": \"https://docker.io/\",\n \"gravatar_url\": \"https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80 r=g d=mm\"\n \"email\": \"jane.doe@example.com\",\n \"is_active\": true\n }",
|
|
"title": "Get a single user"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#update-a-single-user",
|
|
"tags": "",
|
|
"text": "PATCH /api/v1.1/users/:username/ Update profile info for the specified user. Parameters: username \u2013 username of the user whose profile info is being\n updated. Json Parameters: full_name ( string ) \u2013 (optional) the new name of the user. location ( string ) \u2013 (optional) the new location. company ( string ) \u2013 (optional) the new company of the user. profile_url ( string ) \u2013 (optional) the new profile url. gravatar_email ( string ) \u2013 (optional) the new Gravatar\n email address. Request Headers: Authorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token. Content-Type \u2013 MIME Type of post data. JSON, url-encoded\n form data, etc. Status Codes: 200 \u2013 success, user data updated. 400 \u2013 post data validation error. 401 \u2013 authentication error. 403 \u2013 permission error, authenticated user must be the user\n whose data is being updated, OAuth access tokens must have\n profile_write scope. 404 \u2013 the specified username does not exist. Example request : PATCH /api/v1.1/users/janedoe/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\n\n {\n \"location\": \"Private Island\",\n \"profile_url\": \"http://janedoe.com/\",\n \"company\": \"Retired\",\n } Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"id\": 2,\n \"username\": \"janedoe\",\n \"url\": \"https://www.docker.io/api/v1.1/users/janedoe/\",\n \"date_joined\": \"2014-02-12T17:58:01.431312Z\",\n \"type\": \"User\",\n \"full_name\": \"Jane Doe\",\n \"location\": \"Private Island\",\n \"company\": \"Retired\",\n \"profile_url\": \"http://janedoe.com/\",\n \"gravatar_url\": \"https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80 r=g d=mm\"\n \"email\": \"jane.doe@example.com\",\n \"is_active\": true\n }",
|
|
"title": "Update a single user"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#list-email-addresses-for-a-user",
|
|
"tags": "",
|
|
"text": "GET /api/v1.1/users/:username/emails/ List email info for the specified user. Parameters: username \u2013 username of the user whose profile info is being\n updated. Request Headers: Authorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token Status Codes: 200 \u2013 success, user data updated. 401 \u2013 authentication error. 403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_read scope. 404 \u2013 the specified username does not exist. Example request : GET /api/v1.1/users/janedoe/emails/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM Example response : HTTP/1.1 200 OK\n Content-Type: application/json\n\n [\n {\n \"email\": \"jane.doe@example.com\",\n \"verified\": true,\n \"primary\": true\n }\n ]",
|
|
"title": "List email addresses for a user"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#add-email-address-for-a-user",
|
|
"tags": "",
|
|
"text": "POST /api/v1.1/users/:username/emails/ Add a new email address to the specified user's account. The email\naddress must be verified separately, a confirmation email is not\nautomatically sent. Json Parameters: email ( string ) \u2013 email address to be added. Request Headers: Authorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token. Content-Type \u2013 MIME Type of post data. JSON, url-encoded\n form data, etc. Status Codes: 201 \u2013 success, new email added. 400 \u2013 data validation error. 401 \u2013 authentication error. 403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_write scope. 404 \u2013 the specified username does not exist. Example request : POST /api/v1.1/users/janedoe/emails/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM\n\n {\n \"email\": \"jane.doe+other@example.com\"\n } Example response : HTTP/1.1 201 Created\n Content-Type: application/json\n\n {\n \"email\": \"jane.doe+other@example.com\",\n \"verified\": false,\n \"primary\": false\n }",
|
|
"title": "Add email address for a user"
|
|
},
|
|
{
|
|
"loc": "/reference/api/docker_io_accounts_api#delete-email-address-for-a-user",
|
|
"tags": "",
|
|
"text": "DELETE /api/v1.1/users/:username/emails/ Delete an email address from the specified user's account. You\ncannot delete a user's primary email address. Json Parameters: email ( string ) \u2013 email address to be deleted. Request Headers: Authorization \u2013 required authentication credentials of\n either type HTTP Basic or OAuth Bearer Token. Content-Type \u2013 MIME Type of post data. JSON, url-encoded\n form data, etc. Status Codes: 204 \u2013 success, email address removed. 400 \u2013 validation error. 401 \u2013 authentication error. 403 \u2013 permission error, authenticated user must be the user\n whose data is being requested, OAuth access tokens must have\n email_write scope. 404 \u2013 the specified username or email address does not\n exist. Example request : DELETE /api/v1.1/users/janedoe/emails/ HTTP/1.1\n Host: www.docker.io\n Accept: application/json\n Content-Type: application/json\n Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM\n\n {\n \"email\": \"jane.doe+other@example.com\"\n } Example response : HTTP/1.1 204 NO CONTENT\n Content-Length: 0",
|
|
"title": "Delete email address for a user"
|
|
},
|
|
{
|
|
"loc": "/jsearch/",
|
|
"tags": "",
|
|
"text": "Search\n\n \n \n\n\n\nSorry, page not found.",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/jsearch#search",
|
|
"tags": "",
|
|
"text": "Sorry, page not found.",
|
|
"title": "Search"
|
|
},
|
|
{
|
|
"loc": "/terms/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/terms#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
|
|
{
|
|
"loc": "/terms#installation",
|
|
"tags": "",
|
|
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/terms#user-guide",
|
|
"tags": "",
|
|
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
|
|
"title": "User Guide"
|
|
},
|
|
{
|
|
"loc": "/terms#docker-hub",
|
|
"tags": "",
|
|
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
|
|
"title": "Docker Hub"
|
|
},
|
|
{
|
|
"loc": "/terms#examples",
|
|
"tags": "",
|
|
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/terms#articles",
|
|
"tags": "",
|
|
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
|
|
"title": "Articles"
|
|
},
|
|
{
|
|
"loc": "/terms#reference",
|
|
"tags": "",
|
|
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
|
|
"title": "Reference"
|
|
},
|
|
{
|
|
"loc": "/terms#contributor-guide",
|
|
"tags": "",
|
|
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
|
|
"title": "Contributor Guide"
|
|
},
|
|
{
|
|
"loc": "/terms/layer/",
|
|
"tags": "",
|
|
"text": "Layers\nIntroduction\nIn a traditional Linux boot, the kernel first mounts the root File\nSystem as read-only, checks its\nintegrity, and then switches the whole rootfs volume to read-write mode.\nLayer\nWhen Docker mounts the rootfs, it starts read-only, as in a traditional\nLinux boot, but then, instead of changing the file system to read-write\nmode, it takes advantage of a union\nmount to add a read-write\nfile system over the read-only file system. In fact there may be\nmultiple read-only file systems stacked on top of each other. We think\nof each one of these file systems as a layer.\n\nAt first, the top read-write layer has nothing in it, but any time a\nprocess creates a file, this happens in the top layer. And if something\nneeds to update an existing file in a lower layer, then the file gets\ncopied to the upper layer and changes go into the copy. The version of\nthe file on the lower layer cannot be seen by the applications anymore,\nbut it is there, unchanged.\nUnion File System\nWe call the union of the read-write layer and all the read-only layers a\nunion file system.",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/layer#layers",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Layers"
|
|
},
|
|
{
|
|
"loc": "/terms/layer#introduction",
|
|
"tags": "",
|
|
"text": "In a traditional Linux boot, the kernel first mounts the root File\nSystem as read-only, checks its\nintegrity, and then switches the whole rootfs volume to read-write mode.",
|
|
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/layer#layer",
|
|
"tags": "",
"text": "When Docker mounts the rootfs, it starts read-only, as in a traditional\nLinux boot, but then, instead of changing the file system to read-write\nmode, it takes advantage of a union\nmount to add a read-write\nfile system over the read-only file system. In fact, there may be\nmultiple read-only file systems stacked on top of each other. We think\nof each one of these file systems as a layer. At first, the top read-write layer has nothing in it, but any time a\nprocess creates a file, this happens in the top layer. And if something\nneeds to update an existing file in a lower layer, then the file gets\ncopied to the upper layer and changes go into the copy. The version of\nthe file on the lower layer cannot be seen by the applications anymore,\nbut it is there, unchanged.",
"title": "Layer"
|
|
},
|
|
{
|
|
"loc": "/terms/layer#union-file-system",
|
|
"tags": "",
"text": "We call the union of the read-write layer and all the read-only layers a union file system.",
"title": "Union File System"
|
|
},
|
|
{
|
|
"loc": "/terms/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/terms#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
{
|
|
"loc": "/terms/registry/",
|
|
"tags": "",
"text": "Registry\nIntroduction\nA Registry is a hosted service containing\nrepositories of\nimages, which responds to the Registry API.\nThe default registry can be accessed using a browser at\nDocker Hub or using the\nsudo docker search command.\nFurther Reading\nFor more information see Working with\nRepositories",
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/registry#registry",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Registry"
|
|
},
|
|
{
|
|
"loc": "/terms/registry#introduction",
|
|
"tags": "",
"text": "A Registry is a hosted service containing repositories of images, which responds to the Registry API. The default registry can be accessed using a browser at Docker Hub or using the sudo docker search command.",
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/registry#further-reading",
|
|
"tags": "",
|
|
"text": "For more information see Working with\nRepositories",
|
|
"title": "Further Reading"
|
|
},
|
|
{
|
|
"loc": "/terms/container/",
|
|
"tags": "",
"text": "Container\nIntroduction\n\nOnce you start a process in Docker from an Image, Docker\nfetches the image and its Parent Image, and repeats the\nprocess until it reaches the Base Image. Then\nthe Union File System adds a read-write layer on top. That\nread-write layer, plus the information about its Parent\nImage\nand some additional information like its unique ID, networking\nconfiguration, and resource limits, is called a container.\nContainer State\nContainers can change, and so they have state. A container may be\nrunning or exited.\nWhen a container is running, the idea of a \"container\" also includes a\ntree of processes running on the CPU, isolated from the other processes\nrunning on the host.\nWhen the container is exited, the state of the file system and its exit\nvalue is preserved. You can start, stop, and restart a container. The\nprocesses restart from scratch (their memory state is not preserved\nin a container), but the file system is just as it was when the\ncontainer was stopped.\nYou can promote a container to an Image with docker commit.\nOnce a container is an image, you can use it as a parent for new containers.\nContainer IDs\nAll containers are identified by a 64-character hexadecimal string\n(internally a 256-bit value). To simplify their use, a short ID of the\nfirst 12 characters can be used on the command line. There is a small\npossibility of short ID collisions, so the Docker server will always\nreturn the long ID.",
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/container#container",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Container"
|
|
},
|
|
{
|
|
"loc": "/terms/container#introduction",
|
|
"tags": "",
"text": "Once you start a process in Docker from an Image, Docker\nfetches the image and its Parent Image, and repeats the\nprocess until it reaches the Base Image. Then\nthe Union File System adds a read-write layer on top. That\nread-write layer, plus the information about its Parent\nImage\nand some additional information like its unique ID, networking\nconfiguration, and resource limits, is called a container.",
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/container#container-state",
|
|
"tags": "",
"text": "Containers can change, and so they have state. A container may be running or exited. When a container is running, the idea of a \"container\" also includes a\ntree of processes running on the CPU, isolated from the other processes\nrunning on the host. When the container is exited, the state of the file system and its exit\nvalue is preserved. You can start, stop, and restart a container. The\nprocesses restart from scratch (their memory state is not preserved\nin a container), but the file system is just as it was when the\ncontainer was stopped. You can promote a container to an Image with docker commit.\nOnce a container is an image, you can use it as a parent for new containers.",
"title": "Container State"
|
|
},
|
|
{
|
|
"loc": "/terms/container#container-ids",
|
|
"tags": "",
"text": "All containers are identified by a 64-character hexadecimal string\n(internally a 256-bit value). To simplify their use, a short ID of the\nfirst 12 characters can be used on the command line. There is a small\npossibility of short ID collisions, so the Docker server will always\nreturn the long ID.",
"title": "Container IDs"
|
|
},
|
|
{
|
|
"loc": "/terms/repository/",
|
|
"tags": "",
"text": "Repository\nIntroduction\nA repository is a set of images either on your local Docker server or\nshared by pushing it to a Registry\nserver.\nImages can be associated with a repository (or several) by giving them\nan image name using one of three different commands:\n\nAt build time (e.g., sudo docker build -t IMAGENAME),\nWhen committing a container (e.g.,\n sudo docker commit CONTAINERID IMAGENAME) or\nWhen tagging an image ID with an image name (e.g.,\n sudo docker tag IMAGEID IMAGENAME).\n\nA Fully Qualified Image Name (FQIN) can be made up of 3 parts:\n[registry_hostname[:port]/][user_name/](repository_name:version_tag)\nuser_name and registry_hostname default to an empty string. When\nregistry_hostname is an empty string, then docker push will push to\nindex.docker.io:80.\nIf you create a new repository which you want to share, you will need to\nset at least the user_name, as the default blank user_name prefix is\nreserved for official Docker images.\nFor more information see Working with\nRepositories",
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/repository#repository",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Repository"
|
|
},
|
|
{
|
|
"loc": "/terms/repository#introduction",
|
|
"tags": "",
"text": "A repository is a set of images either on your local Docker server or\nshared by pushing it to a Registry\nserver. Images can be associated with a repository (or several) by giving them\nan image name using one of three different commands: At build time (e.g., sudo docker build -t IMAGENAME), When committing a container (e.g.,\n sudo docker commit CONTAINERID IMAGENAME) or When tagging an image ID with an image name (e.g.,\n sudo docker tag IMAGEID IMAGENAME). A Fully Qualified Image Name (FQIN) can be made up of 3 parts: [registry_hostname[:port]/][user_name/](repository_name:version_tag) user_name and registry_hostname default to an empty string. When registry_hostname is an empty string, then docker push will push to index.docker.io:80. If you create a new repository which you want to share, you will need to\nset at least the user_name, as the default blank user_name prefix is\nreserved for official Docker images. For more information see Working with\nRepositories",
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/filesystem/",
|
|
"tags": "",
|
|
"text": "File System\nIntroduction\n\nIn order for a Linux system to run, it typically needs two file\nsystems:\n\nboot file system (bootfs)\nroot file system (rootfs)\n\nThe boot file system contains the bootloader and the kernel. The\nuser never makes any changes to the boot file system. In fact, soon\nafter the boot process is complete, the entire kernel is in memory, and\nthe boot file system is unmounted to free up the RAM associated with the\ninitrd disk image.\nThe root file system includes the typical directory structure we\nassociate with Unix-like operating systems:\n/dev, /proc, /bin, /etc, /lib, /usr, and /tmp plus all the configuration\nfiles, binaries and libraries required to run user applications (like bash,\nls, and so forth).\nWhile there can be important kernel differences between different Linux\ndistributions, the contents and organization of the root file system are\nusually what make your software packages dependent on one distribution\nversus another. Docker can help solve this problem by running multiple\ndistributions at the same time.",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/filesystem#file-system",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "File System"
|
|
},
|
|
{
|
|
"loc": "/terms/filesystem#introduction",
|
|
"tags": "",
|
|
"text": "In order for a Linux system to run, it typically needs two file\nsystems : boot file system (bootfs) root file system (rootfs) The boot file system contains the bootloader and the kernel. The\nuser never makes any changes to the boot file system. In fact, soon\nafter the boot process is complete, the entire kernel is in memory, and\nthe boot file system is unmounted to free up the RAM associated with the\ninitrd disk image. The root file system includes the typical directory structure we\nassociate with Unix-like operating systems: /dev, /proc, /bin, /etc, /lib, /usr, and /tmp plus all the configuration\nfiles, binaries and libraries required to run user applications (like bash,\nls, and so forth). While there can be important kernel differences between different Linux\ndistributions, the contents and organization of the root file system are\nusually what make your software packages dependent on one distribution\nversus another. Docker can help solve this problem by running multiple\ndistributions at the same time.",
|
|
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/image/",
|
|
"tags": "",
"text": "Image\nIntroduction\n\nIn Docker terminology, a read-only Layer is\ncalled an image. An image never changes.\nSince Docker uses a Union File System, the\nprocesses think the whole file system is mounted read-write. But all the\nchanges go to the top-most writeable layer, and underneath, the original\nfile in the read-only image is unchanged. Since images don't change,\nimages do not have state.\n\nParent Image\n\nEach image may depend on another image, which forms the layer beneath\nit. We sometimes say that the lower image is the parent of the upper\nimage.\nBase Image\nAn image that has no parent is a base image.\nImage IDs\nAll images are identified by a 64-character hexadecimal string (internally a\n256-bit value). To simplify their use, a short ID of the first 12\ncharacters can be used on the command line. There is a small possibility\nof short ID collisions, so the Docker server will always return the long\nID.",
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/terms/image#image",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Image"
|
|
},
|
|
{
|
|
"loc": "/terms/image#introduction",
|
|
"tags": "",
"text": "In Docker terminology, a read-only Layer is\ncalled an image. An image never changes. Since Docker uses a Union File System, the\nprocesses think the whole file system is mounted read-write. But all the\nchanges go to the top-most writeable layer, and underneath, the original\nfile in the read-only image is unchanged. Since images don't change,\nimages do not have state.",
"title": "Introduction"
|
|
},
|
|
{
|
|
"loc": "/terms/image#parent-image",
|
|
"tags": "",
"text": "Each image may depend on another image, which forms the layer beneath\nit. We sometimes say that the lower image is the parent of the upper\nimage.",
"title": "Parent Image"
|
|
},
|
|
{
|
|
"loc": "/terms/image#base-image",
|
|
"tags": "",
"text": "An image that has no parent is a base image.",
"title": "Base Image"
|
|
},
|
|
{
|
|
"loc": "/terms/image#image-ids",
|
|
"tags": "",
"text": "All images are identified by a 64-character hexadecimal string (internally a\n256-bit value). To simplify their use, a short ID of the first 12\ncharacters can be used on the command line. There is a small possibility\nof short ID collisions, so the Docker server will always return the long\nID.",
"title": "Image IDs"
|
|
},
|
|
{
|
|
"loc": "/project/",
|
|
"tags": "",
|
|
"text": "Table of Contents\nAbout\n\n\nDocker\n\n\nRelease Notes\n\n\nUnderstanding Docker\n\n\nInstallation\n\n\nUbuntu\n\n\nMac OS X\n\n\nMicrosoft Windows\n\n\nAmazon EC2\n\n\nArch Linux\n\n\nBinaries\n\n\nCentOS\n\n\nCRUX Linux\n\n\nDebian\n\n\nFedora\n\n\nFrugalWare\n\n\nGoogle Cloud Platform\n\n\nGentoo\n\n\nIBM Softlayer\n\n\nRackspace Cloud\n\n\nRed Hat Enterprise Linux\n\n\nOracle Linux\n\n\nSUSE\n\n\nDocker Compose\n\n\nUser Guide\n\n\nThe Docker User Guide\n\n\nGetting Started with Docker Hub\n\n\nDockerizing Applications\n\n\nWorking with Containers\n\n\nWorking with Docker Images\n\n\nLinking containers together\n\n\nManaging data in containers\n\n\nWorking with Docker Hub\n\n\nDocker Compose\n\n\nDocker Machine\n\n\nDocker Swarm\n\n\nDocker Hub\n\n\nDocker Hub\n\n\nAccounts\n\n\nRepositories\n\n\nAutomated Builds\n\n\nOfficial Repo Guidelines\n\n\nExamples\n\n\nDockerizing a Node.js web application\n\n\nDockerizing MongoDB\n\n\nDockerizing a Redis service\n\n\nDockerizing a PostgreSQL service\n\n\nDockerizing a Riak service\n\n\nDockerizing an SSH service\n\n\nDockerizing a CouchDB service\n\n\nDockerizing an Apt-Cacher-ng service\n\n\nGetting started with Compose and Django\n\n\nGetting started with Compose and Rails\n\n\nGetting started with Compose and Wordpress\n\n\nArticles\n\n\nDocker basics\n\n\nAdvanced networking\n\n\nSecurity\n\n\nRunning Docker with HTTPS\n\n\nRun a local registry mirror\n\n\nAutomatically starting containers\n\n\nCreating a base image\n\n\nBest practices for writing Dockerfiles\n\n\nUsing certificates for repository client verification\n\n\nUsing Supervisor\n\n\nProcess management with CFEngine\n\n\nUsing Puppet\n\n\nUsing Chef\n\n\nUsing PowerShell DSC\n\n\nCross-Host linking using ambassador containers\n\n\nRuntime metrics\n\n\nIncreasing a Boot2Docker volume\n\n\nControlling and configuring Docker using Systemd\n\n\nReference\n\n\nCommand line\n\n\nDockerfile\n\n\nFAQ\n\n\nRun Reference\n\n\nCompose command 
line\n\n\nCompose yml\n\n\nCompose ENV variables\n\n\nCompose commandline completion\n\n\nSwarm discovery\n\n\nSwarm strategies\n\n\nSwarm filters\n\n\nSwarm API\n\n\nDocker Hub API\n\n\nDocker Registry API\n\n\nDocker Registry API Client Libraries\n\n\nDocker Hub and Registry Spec\n\n\nDocker Remote API\n\n\nDocker Remote API v1.17\n\n\nDocker Remote API v1.16\n\n\nDocker Remote API Client Libraries\n\n\nDocker Hub Accounts API\n\n\nContributor Guide\n\n\nREADME first\n\n\nGet required software\n\n\nConfigure Git for contributing\n\n\nWork with a development container\n\n\nRun tests and test documentation\n\n\nUnderstand contribution workflow\n\n\nFind an issue\n\n\nWork on an issue\n\n\nCreate a pull request\n\n\nParticipate in the PR review\n\n\nAdvanced contributing\n\n\nWhere to get help\n\n\nCoding style guide\n\n\nDocumentation style guide",
|
|
"title": "**HIDDEN**"
|
|
},
|
|
{
|
|
"loc": "/project#table-of-contents",
|
|
"tags": "",
|
|
"text": "",
|
|
"title": "Table of Contents"
|
|
},
|
|
{
|
|
"loc": "/project#about",
|
|
"tags": "",
|
|
"text": "Docker Release Notes Understanding Docker",
|
|
"title": "About"
|
|
},
|
|
{
|
|
"loc": "/project#installation",
|
|
"tags": "",
|
|
"text": "Ubuntu Mac OS X Microsoft Windows Amazon EC2 Arch Linux Binaries CentOS CRUX Linux Debian Fedora FrugalWare Google Cloud Platform Gentoo IBM Softlayer Rackspace Cloud Red Hat Enterprise Linux Oracle Linux SUSE Docker Compose",
|
|
"title": "Installation"
|
|
},
|
|
{
|
|
"loc": "/project#user-guide",
|
|
"tags": "",
|
|
"text": "The Docker User Guide Getting Started with Docker Hub Dockerizing Applications Working with Containers Working with Docker Images Linking containers together Managing data in containers Working with Docker Hub Docker Compose Docker Machine Docker Swarm",
|
|
"title": "User Guide"
|
|
},
|
|
{
|
|
"loc": "/project#docker-hub",
|
|
"tags": "",
|
|
"text": "Docker Hub Accounts Repositories Automated Builds Official Repo Guidelines",
|
|
"title": "Docker Hub"
|
|
},
|
|
{
|
|
"loc": "/project#examples",
|
|
"tags": "",
|
|
"text": "Dockerizing a Node.js web application Dockerizing MongoDB Dockerizing a Redis service Dockerizing a PostgreSQL service Dockerizing a Riak service Dockerizing an SSH service Dockerizing a CouchDB service Dockerizing an Apt-Cacher-ng service Getting started with Compose and Django Getting started with Compose and Rails Getting started with Compose and Wordpress",
|
|
"title": "Examples"
|
|
},
|
|
{
|
|
"loc": "/project#articles",
|
|
"tags": "",
|
|
"text": "Docker basics Advanced networking Security Running Docker with HTTPS Run a local registry mirror Automatically starting containers Creating a base image Best practices for writing Dockerfiles Using certificates for repository client verification Using Supervisor Process management with CFEngine Using Puppet Using Chef Using PowerShell DSC Cross-Host linking using ambassador containers Runtime metrics Increasing a Boot2Docker volume Controlling and configuring Docker using Systemd",
|
|
"title": "Articles"
|
|
},
|
|
{
|
|
"loc": "/project#reference",
|
|
"tags": "",
|
|
"text": "Command line Dockerfile FAQ Run Reference Compose command line Compose yml Compose ENV variables Compose commandline completion Swarm discovery Swarm strategies Swarm filters Swarm API Docker Hub API Docker Registry API Docker Registry API Client Libraries Docker Hub and Registry Spec Docker Remote API Docker Remote API v1.17 Docker Remote API v1.16 Docker Remote API Client Libraries Docker Hub Accounts API",
|
|
"title": "Reference"
|
|
},
|
|
{
|
|
"loc": "/project#contributor-guide",
|
|
"tags": "",
|
|
"text": "README first Get required software Configure Git for contributing Work with a development container Run tests and test documentation Understand contribution workflow Find an issue Work on an issue Create a pull request Participate in the PR review Advanced contributing Where to get help Coding style guide Documentation style guide",
|
|
"title": "Contributor Guide"
|
|
},
|
|
{
|
|
"loc": "/project/who-written-for/",
|
|
"tags": "",
|
|
"text": "README first\nThis section of the documentation contains a guide for Docker users who want to\ncontribute code or documentation to the Docker project. As a community, we\nshare rules of behavior and interaction. Make sure you are familiar with the community guidelines before continuing.\nWhere and what you can contribute\nThe Docker project consists of not just one but several repositories on GitHub.\nSo, in addition to the docker/docker repository, there is the\ndocker/libcontainer repo, the docker/machine repo, and several more.\nContribute to any of these and you contribute to the Docker project.\nNot all Docker repositories use the Go language. Also, each repository has its\nown focus area. So, if you are an experienced contributor, think about\ncontributing to a Docker repository that has a language or a focus area you are\nfamiliar with.\nIf you are new to the open source community, to Docker, or to formal\nprogramming, you should start out contributing to the docker/docker\nrepository. Why? Because this guide is written for that repository specifically.\nFinally, code or documentation isn't the only way to contribute. You can report\nan issue, add to discussions in our community channel, write a blog post, or\ntake a usability test. You can even propose your own type of contribution.\nRight now we don't have a lot written about this yet, so just email\n if this type of contributing interests you.\nA turtle is involved\n\nEnough said.\nHow to use this guide\nThis is written for the distracted, the overworked, the sloppy reader with fair\ngit skills and a failing memory for the GitHub GUI. The guide attempts to\nexplain how to use the Docker environment as precisely, predictably, and\nprocedurally as possible.\nUsers who are new to the Docker development environment should start by setting\nup their environment. Then, they should try a simple code change. 
After that,\nthey should find something to work on or propose a totally new change.\nIf you are a programming prodigy, you still may find this documentation useful.\nPlease feel free to skim past information you find obvious or boring.\nHow to get started\nStart by getting the software you need to contribute.",
"title": "README first"
|
|
},
|
|
{
|
|
"loc": "/project/who-written-for#readme-first",
|
|
"tags": "",
|
|
"text": "This section of the documentation contains a guide for Docker users who want to\ncontribute code or documentation to the Docker project. As a community, we\nshare rules of behavior and interaction. Make sure you are familiar with the community guidelines before continuing.",
|
|
"title": "README first"
|
|
},
|
|
{
|
|
"loc": "/project/who-written-for#where-and-what-you-can-contribute",
|
|
"tags": "",
|
|
"text": "The Docker project consists of not just one but several repositories on GitHub.\nSo, in addition to the docker/docker repository, there is the docker/libcontainer repo, the docker/machine repo, and several more.\nContribute to any of these and you contribute to the Docker project. Not all Docker repositories use the Go language. Also, each repository has its\nown focus area. So, if you are an experienced contributor, think about\ncontributing to a Docker repository that has a language or a focus area you are\nfamiliar with. If you are new to the open source community, to Docker, or to formal\nprogramming, you should start out contributing to the docker/docker \nrepository. Why? Because this guide is written for that repository specifically. Finally, code or documentation isn't the only way to contribute. You can report\nan issue, add to discussions in our community channel, write a blog post, or\ntake a usability test. You can even propose your own type of contribution.\nRight now we don't have a lot written about this yet, so just email if this type of contributing interests you.",
|
|
"title": "Where and what you can contribute"
|
|
},
|
|
{
|
|
"loc": "/project/who-written-for#a-turtle-is-involved",
|
|
"tags": "",
|
|
"text": "Enough said.",
|
|
"title": "A turtle is involved"
|
|
},
|
|
{
|
|
"loc": "/project/who-written-for#how-to-use-this-guide",
|
|
"tags": "",
|
|
"text": "This is written for the distracted, the overworked, the sloppy reader with fair git skills and a failing memory for the GitHub GUI. The guide attempts to\nexplain how to use the Docker environment as precisely, predictably, and\nprocedurally as possible. Users who are new to the Docker development environment should start by setting\nup their environment. Then, they should try a simple code change. After that,\nyou should find something to work on or propose at totally new change. If you are a programming prodigy, you still may find this documentation useful.\nPlease feel free to skim past information you find obvious or boring.",
"title": "How to use this guide"
},
{
"loc": "/project/who-written-for#how-to-get-started",
"tags": "",
"text": "Start by getting the software you need to contribute .",
"title": "How to get started"
},
{
"loc": "/project/software-required/",
"tags": "",
"text": "Get the required software\nBefore you begin contributing you must have:\n\na GitHub account\ngit\nmake \ndocker\n\nYou'll notice that go, the language that Docker is written in, is not listed.\nThat's because you don't need it installed; Docker's development environment\nprovides it for you. You'll learn more about the development environment later.\nGet a GitHub account\nTo contribute to the Docker project, you will need a GitHub account. A free account is\nfine. All the Docker project repositories are public and visible to everyone.\nYou should also have some experience using both the GitHub application and git\non the command line. \nInstall git\nInstall git on your local system. You can check if git is already on your\nsystem and properly installed with the following command:\n$ git --version\n\nThis documentation is written using git version 2.2.2. Your version may be\ndifferent depending on your OS.\nInstall make\nInstall make. You can check if make is on your system with the following\ncommand:\n$ make -v\n\nThis documentation is written using GNU Make 3.81. Your version may be different\ndepending on your OS.\nInstall or upgrade Docker\nIf you haven't already, install the Docker software using the \ninstructions for your operating system.\nIf you have an existing installation, check your version and make sure you have\nthe latest Docker. \nTo check if docker is already installed on Linux:\n$ docker --version\nDocker version 1.5.0, build a8a31ef\n\nOn Mac OS X or Windows, you should have installed Boot2Docker, which includes\nDocker. You'll need to verify both Boot2Docker and Docker. 
This\ndocumentation was written on OS X using the following versions.\n$ boot2docker version\nBoot2Docker-cli version: v1.5.0\nGit commit: ccd9032\n\n$ docker --version\nDocker version 1.5.0, build a8a31ef\n\nLinux users and sudo\nThis guide assumes you have added your user to the docker group on your system.\nTo check, list the group's contents:\n$ getent group docker\ndocker:x:999:ubuntu\n\nIf the command returns no matches, you have two choices. You can preface this\nguide's docker commands with sudo as you work. Alternatively, you can add\nyour user to the docker group as follows:\n$ sudo usermod -aG docker ubuntu\n\nYou must log out and back in for this modification to take effect.\nWhere to go next\nIn the next section, you'll learn how to set up and configure Git for\ncontributing to Docker.",
"title": "Get required software"
},
{
"loc": "/project/software-required#get-the-required-software",
"tags": "",
"text": "Before you begin contributing you must have: a GitHub account git make docker You'll notice that go , the language that Docker is written in, is not listed.\nThat's because you don't need it installed; Docker's development environment\nprovides it for you. You'll learn more about the development environment later. Get a GitHub account To contribute to the Docker project, you will need a GitHub account . A free account is\nfine. All the Docker project repositories are public and visible to everyone. You should also have some experience using both the GitHub application and git \non the command line. Install git Install git on your local system. You can check if git is already on your\nsystem and properly installed with the following command: $ git --version This documentation is written using git version 2.2.2. Your version may be\ndifferent depending on your OS. Install make Install make . You can check if make is on your system with the following\ncommand: $ make -v This documentation is written using GNU Make 3.81. Your version may be different\ndepending on your OS. Install or upgrade Docker If you haven't already, install the Docker software using the instructions for your operating system .\nIf you have an existing installation, check your version and make sure you have\nthe latest Docker. To check if docker is already installed on Linux: $ docker --version\nDocker version 1.5.0, build a8a31ef On Mac OS X or Windows, you should have installed Boot2Docker, which includes\nDocker. You'll need to verify both Boot2Docker and Docker. This\ndocumentation was written on OS X using the following versions. $ boot2docker version\nBoot2Docker-cli version: v1.5.0\nGit commit: ccd9032\n\n$ docker --version\nDocker version 1.5.0, build a8a31ef",
"title": "Get the required software"
},
{
"loc": "/project/software-required#linux-users-and-sudo",
"tags": "",
"text": "This guide assumes you have added your user to the docker group on your system.\nTo check, list the group's contents: $ getent group docker\ndocker:x:999:ubuntu If the command returns no matches, you have two choices. You can preface this\nguide's docker commands with sudo as you work. Alternatively, you can add\nyour user to the docker group as follows: $ sudo usermod -aG docker ubuntu You must log out and back in for this modification to take effect.",
"title": "Linux users and sudo"
},
{
"loc": "/project/software-required#where-to-go-next",
"tags": "",
"text": "In the next section, you'll learn how to set up and configure Git for\ncontributing to Docker .",
"title": "Where to go next"
},
{
"loc": "/project/set-up-git/",
"tags": "",
"text": "Configure Git for contributing\nWork through this page to configure Git and a repository you'll use throughout\nthe Contributor Guide. The work you do further in the guide depends on the work\nyou do here. \nFork and clone the Docker code\nBefore contributing, you first fork the Docker code repository. A fork copies\na repository at a particular point in time. GitHub tracks for you where a fork\noriginates.\nAs you make contributions, you change your fork's code. When you are ready,\nyou make a pull request back to the original Docker repository. If you aren't\nfamiliar with this workflow, don't worry, this guide walks you through all the\nsteps. \nTo fork and clone Docker:\n\n\nOpen a browser and log into GitHub with your account.\n\n\nGo to the docker/docker repository.\n\n\nClick the \"Fork\" button in the upper right corner of the GitHub interface.\n\nGitHub forks the repository to your GitHub account. The original\ndocker/docker repository becomes a new fork YOUR_ACCOUNT/docker under\nyour account.\n\n\nCopy your fork's clone URL from GitHub.\nGitHub allows you to use HTTPS or SSH protocols for clones. You can use the\ngit command line or clients like Subversion to clone a repository. \n\nThis guide assumes you are using the HTTPS protocol and the git command\nline. If you are comfortable with SSH and some other tool, feel free to use\nthat instead. 
You'll need to convert what you see in the guide to what is\nappropriate to your tool.\n\n\nOpen a terminal window on your local host and change to your home directory.\n$ cd ~\n\n\n\nCreate a repos directory.\n$ mkdir repos\n\n\n\nChange into your repos directory.\n$ cd repos\n\n\n\nClone the fork to your local host into a repository called docker-fork.\n$ git clone https://github.com/moxiegirl/docker.git docker-fork\n\nNaming your local repo docker-fork should help make these instructions\neasier to follow; experienced coders don't typically change the name.\n\n\nChange directory into your new docker-fork directory.\n$ cd docker-fork\n\nTake a moment to familiarize yourself with the repository's contents. List\nthe contents. \n\n\nSet your signature and an upstream remote\nWhen you contribute to Docker, you must certify you agree with the \nDeveloper Certificate of Origin.\nYou indicate your agreement by signing your git commits like this:\nSigned-off-by: Pat Smith pat.smith@email.com\n\nTo create a signature, you configure your username and email address in Git.\nYou can set these globally or locally on just your docker-fork repository.\nYou must sign with your real name. We don't accept anonymous contributions or\ncontributions through pseudonyms.\nAs you change code in your fork, you'll want to keep it in sync with the changes\nothers make in the docker/docker repository. To make syncing easier, you'll\nalso add a remote called upstream that points to docker/docker. A remote\nis just another project version hosted on the internet or network.\nTo configure your username, email, and add a remote:\n\n\nChange to the root of your docker-fork repository.\n$ cd docker-fork\n\n\n\nSet your user.name for the repository.\n$ git config --local user.name \"FirstName LastName\"\n\n\n\nSet your user.email for the repository.\n$ git config --local user.email \"emailname@mycompany.com\"\n\n\n\nSet your local repo to track changes upstream, on the docker repository. 
\n$ git remote add upstream https://github.com/docker/docker.git\n\n\n\nCheck the result in your git configuration.\n$ git config --local -l\ncore.repositoryformatversion=0\ncore.filemode=true\ncore.bare=false\ncore.logallrefupdates=true\nremote.origin.url=https://github.com/moxiegirl/docker.git\nremote.origin.fetch=+refs/heads/*:refs/remotes/origin/*\nbranch.master.remote=origin\nbranch.master.merge=refs/heads/master\nuser.name=Mary Anthony\nuser.email=mary@docker.com\nremote.upstream.url=https://github.com/docker/docker.git\nremote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/*\n\nTo list just the remotes use:\n$ git remote -v\norigin https://github.com/moxiegirl/docker.git (fetch)\norigin https://github.com/moxiegirl/docker.git (push)\nupstream https://github.com/docker/docker.git (fetch)\nupstream https://github.com/docker/docker.git (push)\n\n\n\nCreate and push a branch\nAs you change code in your fork, you make your changes on a repository branch.\nThe branch name should reflect what you are working on. In this section, you\ncreate a branch, make a change, and push it up to your fork. \nThis branch is just for testing your config for this guide. The changes are part\nof a dry run so the branch name is going to be dry-run-test. To create and push\nthe branch to your fork on GitHub:\n\n\nOpen a terminal and go to the root of your docker-fork.\n$ cd docker-fork\n\n\n\nCreate a dry-run-test branch.\n$ git checkout -b dry-run-test\n\nThis command creates the branch and switches the repository to it.\n\n\nVerify you are in your new branch.\n$ git branch\n* dry-run-test\n master\n\nThe current branch has an * (asterisk) marker. So, these results show you\nare on the right branch. \n\n\nCreate a TEST.md file in the repository's root.\n$ touch TEST.md\n\n\n\nEdit the file and add your email and location.\n\nYou can use any text editor you are comfortable with.\n\n\nClose and save the file.\n\n\nCheck the status of your branch. 
\n$ git status\nOn branch dry-run-test\nUntracked files:\n (use \"git add file...\" to include in what will be committed)\n\n TEST.md\n\nnothing added to commit but untracked files present (use \"git add\" to track)\n\nYou've only changed the one file. It is untracked so far by git.\n\n\nAdd your file.\n$ git add TEST.md\n\nThat is the only staged file. Stage is a fancy word for work that Git is\ntracking.\n\n\nSign and commit your change.\n$ git commit -s -m \"Making a dry run test.\"\n[dry-run-test 6e728fb] Making a dry run test\n 1 file changed, 1 insertion(+)\n create mode 100644 TEST.md\n\nCommit messages should have a short summary sentence of no more than 50\ncharacters. Optionally, you can also include a more detailed explanation\nafter the summary. Separate the summary from any explanation with an empty\nline.\n\n\nPush your changes to GitHub.\n$ git push --set-upstream origin dry-run-test\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com':\n\nGit prompts you for your GitHub username and password. Then, the command\nreturns a result.\nCounting objects: 13, done.\nCompressing objects: 100% (2/2), done.\nWriting objects: 100% (3/3), 320 bytes | 0 bytes/s, done.\nTotal 3 (delta 1), reused 0 (delta 0)\nTo https://github.com/moxiegirl/docker.git\n * [new branch] dry-run-test -> dry-run-test\nBranch dry-run-test set up to track remote branch dry-run-test from origin.\n\n\n\nOpen your browser to GitHub.\n\n\nNavigate to your Docker fork.\n\n\nMake sure the dry-run-test branch exists, that it has your commit, and the\ncommit is signed.\n\n\n\nWhere to go next\nCongratulations, you have finished configuring both your local host environment\nand Git for contributing. In the next section you'll learn how to set up and\nwork in a Docker development container.",
"title": "Configure Git for contributing"
},
{
"loc": "/project/set-up-git#configure-git-for-contributing",
"tags": "",
"text": "Work through this page to configure Git and a repository you'll use throughout\nthe Contributor Guide. The work you do further in the guide depends on the work\nyou do here.",
"title": "Configure Git for contributing"
},
{
"loc": "/project/set-up-git#fork-and-clone-the-docker-code",
"tags": "",
"text": "Before contributing, you first fork the Docker code repository. A fork copies\na repository at a particular point in time. GitHub tracks for you where a fork\noriginates. As you make contributions, you change your fork's code. When you are ready,\nyou make a pull request back to the original Docker repository. If you aren't\nfamiliar with this workflow, don't worry, this guide walks you through all the\nsteps. To fork and clone Docker: Open a browser and log into GitHub with your account. Go to the docker/docker repository . Click the \"Fork\" button in the upper right corner of the GitHub interface. GitHub forks the repository to your GitHub account. The original docker/docker repository becomes a new fork YOUR_ACCOUNT/docker under\nyour account. Copy your fork's clone URL from GitHub. GitHub allows you to use HTTPS or SSH protocols for clones. You can use the git command line or clients like Subversion to clone a repository. This guide assumes you are using the HTTPS protocol and the git command\nline. If you are comfortable with SSH and some other tool, feel free to use\nthat instead. You'll need to convert what you see in the guide to what is\nappropriate to your tool. Open a terminal window on your local host and change to your home directory. $ cd ~ Create a repos directory. $ mkdir repos Change into your repos directory. $ cd repos Clone the fork to your local host into a repository called docker-fork . $ git clone https://github.com/moxiegirl/docker.git docker-fork Naming your local repo docker-fork should help make these instructions\neasier to follow; experienced coders don't typically change the name. Change directory into your new docker-fork directory. $ cd docker-fork Take a moment to familiarize yourself with the repository's contents. List\nthe contents.",
"title": "Fork and clone the Docker code"
},
{
"loc": "/project/set-up-git#set-your-signature-and-an-upstream-remote",
"tags": "",
"text": "When you contribute to Docker, you must certify you agree with the Developer Certificate of Origin .\nYou indicate your agreement by signing your git commits like this: Signed-off-by: Pat Smith pat.smith@email.com To create a signature, you configure your username and email address in Git.\nYou can set these globally or locally on just your docker-fork repository.\nYou must sign with your real name. We don't accept anonymous contributions or\ncontributions through pseudonyms. As you change code in your fork, you'll want to keep it in sync with the changes\nothers make in the docker/docker repository. To make syncing easier, you'll\nalso add a remote called upstream that points to docker/docker . A remote\nis just another project version hosted on the internet or network. To configure your username, email, and add a remote: Change to the root of your docker-fork repository. $ cd docker-fork Set your user.name for the repository. $ git config --local user.name \"FirstName LastName\" Set your user.email for the repository. $ git config --local user.email \"emailname@mycompany.com\" Set your local repo to track changes upstream, on the docker repository. $ git remote add upstream https://github.com/docker/docker.git Check the result in your git configuration. 
$ git config --local -l\ncore.repositoryformatversion=0\ncore.filemode=true\ncore.bare=false\ncore.logallrefupdates=true\nremote.origin.url=https://github.com/moxiegirl/docker.git\nremote.origin.fetch=+refs/heads/*:refs/remotes/origin/*\nbranch.master.remote=origin\nbranch.master.merge=refs/heads/master\nuser.name=Mary Anthony\nuser.email=mary@docker.com\nremote.upstream.url=https://github.com/docker/docker.git\nremote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/* To list just the remotes use: $ git remote -v\norigin https://github.com/moxiegirl/docker.git (fetch)\norigin https://github.com/moxiegirl/docker.git (push)\nupstream https://github.com/docker/docker.git (fetch)\nupstream https://github.com/docker/docker.git (push)",
"title": "Set your signature and an upstream remote"
},
{
"loc": "/project/set-up-git#create-and-push-a-branch",
"tags": "",
"text": "As you change code in your fork, you make your changes on a repository branch.\nThe branch name should reflect what you are working on. In this section, you\ncreate a branch, make a change, and push it up to your fork. This branch is just for testing your config for this guide. The changes are part\nof a dry run so the branch name is going to be dry-run-test. To create and push\nthe branch to your fork on GitHub: Open a terminal and go to the root of your docker-fork . $ cd docker-fork Create a dry-run-test branch. $ git checkout -b dry-run-test This command creates the branch and switches the repository to it. Verify you are in your new branch. $ git branch\n* dry-run-test\n master The current branch has an * (asterisk) marker. So, these results show you\nare on the right branch. Create a TEST.md file in the repository's root. $ touch TEST.md Edit the file and add your email and location. You can use any text editor you are comfortable with. Close and save the file. Check the status of your branch. $ git status\nOn branch dry-run-test\nUntracked files:\n (use \"git add file ...\" to include in what will be committed)\n\n TEST.md\n\nnothing added to commit but untracked files present (use \"git add\" to track) You've only changed the one file. It is untracked so far by git. Add your file. $ git add TEST.md That is the only staged file. Stage is a fancy word for work that Git is\ntracking. Sign and commit your change. $ git commit -s -m \"Making a dry run test.\"\n[dry-run-test 6e728fb] Making a dry run test\n 1 file changed, 1 insertion(+)\n create mode 100644 TEST.md Commit messages should have a short summary sentence of no more than 50\ncharacters. Optionally, you can also include a more detailed explanation\nafter the summary. Separate the summary from any explanation with an empty\nline. Push your changes to GitHub. 
$ git push --set-upstream origin dry-run-test\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com': Git prompts you for your GitHub username and password. Then, the command\nreturns a result. Counting objects: 13, done.\nCompressing objects: 100% (2/2), done.\nWriting objects: 100% (3/3), 320 bytes | 0 bytes/s, done.\nTotal 3 (delta 1), reused 0 (delta 0)\nTo https://github.com/moxiegirl/docker.git\n * [new branch] dry-run-test -> dry-run-test\nBranch dry-run-test set up to track remote branch dry-run-test from origin. Open your browser to GitHub. Navigate to your Docker fork. Make sure the dry-run-test branch exists, that it has your commit, and the\ncommit is signed.",
"title": "Create and push a branch"
},
{
"loc": "/project/set-up-git#where-to-go-next",
"tags": "",
"text": "Congratulations, you have finished configuring both your local host environment\nand Git for contributing. In the next section you'll learn how to set up and\nwork in a Docker development container .",
"title": "Where to go next"
},
{
"loc": "/project/set-up-dev-env/",
"tags": "",
"text": "Work with a development container\nIn this section, you learn to develop like a member of Docker's core team.\nThe docker repository includes a Dockerfile at its root. This file defines\nDocker's development environment. The Dockerfile lists the environment's\ndependencies: system libraries and binaries, go environment, go dependencies,\netc. \nDocker's development environment is itself, ultimately, a Docker container.\nYou use the docker repository and its Dockerfile to create a Docker image, \nrun a Docker container, and develop code in the container. Docker itself builds,\ntests, and releases new Docker versions using this container.\nIf you followed the procedures that \nset up Git for contributing, you should have a fork of the docker/docker\nrepository. You also created a branch called dry-run-test. In this section,\nyou continue working with your fork on this branch.\nClean your host of Docker artifacts\nDocker developers run the latest stable release of the Docker software, or\nBoot2Docker and Docker if their machine is Mac OS X. They clean their local\nhosts of unnecessary Docker artifacts such as stopped containers or unused\nimages. Cleaning unnecessary artifacts isn't strictly necessary but it is\ngood practice, so it is included here.\nTo remove unnecessary artifacts:\n\n\nVerify that you have no unnecessary containers running on your host.\n$ docker ps\n\nYou should see something similar to the following:\n\n \n CONTAINER ID\n IMAGE\n COMMAND\n CREATED\n STATUS\n PORTS\n NAMES\n \n\nThere are no running containers on this host. If you have running but unused\ncontainers, stop and then remove them with the docker stop and docker rm\ncommands.\n\n\nVerify that your host has no dangling images.\n$ docker images\n\nYou should see something similar to the following:\n\n \n REPOSITORY\n TAG\n IMAGE ID\n CREATED\n VIRTUAL SIZE\n \n\nThis host has no images. You may have one or more dangling images. 
A\ndangling image is not used by a running container and is not an ancestor of\nanother image on your system. A fast way to remove dangling images is\nthe following:\n$ docker rmi -f $(docker images -q -a -f dangling=true)\n\nThis command uses docker images to list all images (-a flag) by numeric\nIDs (-q flag) and filters them to find dangling images (-f\ndangling=true). Then, the docker rmi command forcibly (-f flag) removes\nthe resulting list. To remove just one image, use the docker rmi ID\ncommand.\n\n\nBuild an image\nIf you followed the last procedure, your host is clean of unnecessary images \nand containers. In this section, you build an image from the Docker development\nenvironment.\n\n\nOpen a terminal.\nMac users, use boot2docker status to make sure Boot2Docker is running. You\nmay need to run eval \"$(boot2docker shellinit)\" to initialize your shell\nenvironment.\n\n\nChange into the root of your forked repository.\n$ cd ~/repos/docker-fork\n\nIf you are following along with this guide, you created a dry-run-test\nbranch when you set up Git for\ncontributing.\n\n\nEnsure you are on your dry-run-test branch.\n$ git checkout dry-run-test\n\nIf you get a message that the branch doesn't exist, add the -b flag so the\ncommand both creates the branch and checks it out.\n\n\nCompile your development environment container into an image.\n$ docker build -t dry-run-test .\n\nThe docker build command returns informational messages as it runs. The\nfirst build may take a few minutes to create an image. Using the\ninstructions in the Dockerfile, the build may need to download source and\nother images. 
A successful build returns a final status message similar to\nthe following:\nSuccessfully built 676815d59283\n\n\n\nList your Docker images again.\n$ docker images\n\nYou should see something similar to this:\n\n \n REPOSITORY\n TAG\n IMAGE ID\n CREATED\n VIRTUAL SIZE\n \n \n dry-run-test\n latest\n 663fbee70028\n About a minute ago\n \n \n \n ubuntu\n trusty\n 2d24f826cb16\n 2 days ago\n 188.3 MB\n \n \n ubuntu\n trusty-20150218.1\n 2d24f826cb16\n 2 days ago\n 188.3 MB\n \n \n ubuntu\n 14.04\n 2d24f826cb16\n 2 days ago\n 188.3 MB\n \n \n ubuntu\n 14.04.2\n 2d24f826cb16\n 2 days ago\n 188.3 MB\n \n \n ubuntu\n latest\n 2d24f826cb16\n 2 days ago\n 188.3 MB\n \n\nLocate your new dry-run-test image in the list. You should also see a\nnumber of ubuntu images. The build process creates these. They are the\nancestors of your new Docker development image. When you next rebuild your\nimage, the build process reuses these ancestor images if they exist. \nKeeping the ancestor images improves the build performance. When you rebuild\nthe child image, the build process uses the local ancestors rather than\nretrieving them from the Hub. The build process gets new ancestors only if\nDocker Hub has updated versions.\n\n\nStart a container and run a test\nAt this point, you have created a new Docker development environment image. Now,\nyou'll use this image to create a Docker container to develop in. Then, you'll\nbuild and run a docker binary in your container.\n\n\nOpen two additional terminals on your host.\nAt this point, you'll have about three terminals open.\n\nMac OS X users, make sure you run eval \"$(boot2docker shellinit)\" in any new \nterminals.\n\n\nIn a terminal, create a new container from your dry-run-test image.\n$ docker run --privileged --rm -ti dry-run-test /bin/bash\nroot@5f8630b873fe:/go/src/github.com/docker/docker#\n\nThe command creates a container from your dry-run-test image. It opens an\ninteractive terminal (-ti) running a /bin/bash shell. 
The\n--privileged flag gives the container access to kernel features and device\naccess. It is this flag that allows you to run a container in a container.\nFinally, the --rm flag instructs Docker to remove the container when you\nexit the /bin/bash shell.\nThe container includes the source of your image repository in the\n/go/src/github.com/docker/docker directory. Try listing the contents to\nverify they are the same as that of your docker-fork repo.\n\n\n\nInvestigate your container a bit. \nIf you do a go version you'll find the go language is part of the\ncontainer. \nroot@31ed86e9ddcf:/go/src/github.com/docker/docker# go version\ngo version go1.4.2 linux/amd64\n\nSimilarly, if you do a docker version you'll find the container\nhas no docker binary. \nroot@31ed86e9ddcf:/go/src/github.com/docker/docker# docker version\nbash: docker: command not found\n\nYou will create one in the next steps.\n\n\nFrom the /go/src/github.com/docker/docker directory make a docker binary\nwith the make.sh script.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh binary\n\nYou only call hack/make.sh to build a binary inside a Docker\ndevelopment container as you are now. On your host, you'll use make\ncommands (more about this later). 
\nAs it makes the binary, the make.sh script reports the build's progress.\nWhen the command completes successfully, you should see the following\noutput:\n--- Making bundle: ubuntu (in bundles/1.5.0-dev/ubuntu)\nCreated package {:path=\"lxc-docker-1.5.0-dev_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb\"}\nCreated package {:path=\"lxc-docker_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb\"}\n\n\n\nList all the contents of the binary directory.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary/\ndocker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256\n\nYou should see that the binary directory, just as it sounds, contains the\nbuilt binaries.\n\n\nCopy the docker binary to /usr/bin in your container.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# cp bundles/1.5.0-dev/binary/docker /usr/bin\n\n\n\nInside your container, check your Docker version.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# docker --version\nDocker version 1.5.0-dev, build 6e728fb\n\nInside the container you are running a development version. 
This is the version\non the current branch; it reflects the value of the VERSION file at the\nroot of your docker-fork repository.\n\n\nStart a docker daemon running inside your container.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# docker -dD\n\nThe -dD flag starts the daemon in debug mode; you'll find this useful\nwhen debugging your code.\n\n\nBring up one of the terminals on your local host.\n\n\nList your containers and look for the container running the dry-run-test image.\n$ docker ps\n\n\n \n CONTAINER ID\n IMAGE\n COMMAND\n CREATED\n STATUS\n PORTS\n NAMES\n \n \n 474f07652525\n dry-run-test:latest\n \"hack/dind /bin/bash\n 14 minutes ago\n Up 14 minutes\n \n tender_shockley\n \n\nIn this example, the container's name is tender_shockley; yours will be\ndifferent.\n\n\nFrom the terminal, start another shell on your Docker development container.\n$ docker exec -it tender_shockley bash\n\nAt this point, you have two terminals both with a shell open into your\ndevelopment container. One terminal is running a debug session. The other\nterminal is displaying a bash prompt.\n\n\nAt the prompt, test the Docker client by running the hello-world container. \nroot@9337c96e017a:/go/src/github.com/docker/docker# docker run hello-world\n\nYou should see the image load and return. Meanwhile, you\ncan see the calls made via the debug session in your other terminal.\n\n\n\nRestart a container with your source\nAt this point, you have experienced the \"Docker inception\" technique. That is,\nyou have:\n\nbuilt a Docker image from the Docker repository\ncreated and started a Docker development container from that image\nbuilt a Docker binary inside of your Docker development container\nlaunched a docker daemon using your newly compiled binary\ncalled the docker client to run a hello-world container inside\n your development container\n\nWhen you really get to developing code though, you'll want to iterate code\nchanges and builds inside the container. 
For that you need to mount your local\nDocker repository source into your Docker container. Try that now.\n\n\nIf you haven't already, exit out of BASH shells in your running Docker\ncontainer.\nIf you have followed this guide exactly, exiting out of your BASH shells stops\nthe running container. You can use the docker ps command to verify the\ndevelopment container is stopped. All of your terminals should be at the\nlocal host prompt.\n\n\nChoose a terminal and make sure you are in your docker-fork repository.\n$ pwd\n/Users/mary/go/src/github.com/moxiegirl/docker-fork\n\nYour location will be different because it reflects your environment. \n\n\nCreate a container using dry-run-test but this time mount your repository\nonto the /go directory inside the container.\n$ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash\n\nWhen you pass pwd, docker resolves it to your current directory.\n\n\nFrom inside the container, list your binary directory.\nroot@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary\nls: cannot access binary: No such file or directory\n\nYour dry-run-test image does not retain any of the changes you made inside\nthe container. This is the expected behavior for a container. \n\n\nIn a fresh terminal on your local host, change to the docker-fork root.\n$ cd ~/repos/docker-fork/\n\n\n\nCreate a fresh binary but this time use the make command.\n$ make BINDDIR=. binary\n\nThe BINDDIR flag is only necessary on Mac OS X but it won't hurt to pass\nit on the Linux command line. The make command, like the make.sh script\ninside the container, reports its progress. 
When the make succeeds, it\nreturns the location of the new binary.\n\n\nBack in the terminal running the container, list your binary directory.\nroot@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary\ndocker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256\n\nThe compiled binaries created from your repository on your local host are\nnow available inside your running Docker development container.\n\n\nRepeat the steps you ran in the previous procedure.\n\ncopy the binary inside the development container using\n cp bundles/1.5.0-dev/binary/docker /usr/bin\nstart docker -dD to launch the Docker daemon inside the container\nrun docker ps on local host to get the development container's name\nconnect to your running container docker exec -it container_name bash\nuse the docker run hello-world command to create and run a container \n inside your development container\n\n\n\nWhere to go next\nCongratulations, you have successfully achieved Docker inception. At this point,\nyou've set up your development environment and verified almost all the essential\nprocesses you need to contribute. Of course, before you start contributing, \nyou'll need to learn one more piece of the development environment, the test\nframework.",
"title": "Work with a development container"
},
{
"loc": "/project/set-up-dev-env#work-with-a-development-container",
"tags": "",
"text": "In this section, you learn to develop like a member of Docker's core team.\nThe docker repository includes a Dockerfile at its root. This file defines\nDocker's development environment. The Dockerfile lists the environment's\ndependencies: system libraries and binaries, go environment, go dependencies,\netc. Docker's development environment is itself, ultimately, a Docker container.\nYou use the docker repository and its Dockerfile to create a Docker image, \nrun a Docker container, and develop code in the container. Docker itself builds,\ntests, and releases new Docker versions using this container. If you followed the procedures that \nset up Git for contributing , you should have a fork of the docker/docker \nrepository. You also created a branch called dry-run-test . In this section,\nyou continue working with your fork on this branch.",
"title": "Work with a development container"
},
{
"loc": "/project/set-up-dev-env#clean-your-host-of-docker-artifacts",
"tags": "",
"text": "Docker developers run the latest stable release of the Docker software, or \nBoot2Docker and Docker if their machine is Mac OS X. They clean their local\nhosts of unnecessary Docker artifacts such as stopped containers or unused\nimages. Cleaning unnecessary artifacts isn't strictly necessary but it is\ngood practice, so it is included here. To remove unnecessary artifacts: Verify that you have no unnecessary containers running on your host. $ docker ps You should see something similar to the following: \n \n CONTAINER ID \n IMAGE \n COMMAND \n CREATED \n STATUS \n PORTS \n NAMES \n There are no running containers on this host. If you have running but unused\ncontainers, stop and then remove them with the docker stop and docker rm \ncommands. Verify that your host has no dangling images. $ docker images You should see something similar to the following: \n \n REPOSITORY \n TAG \n IMAGE ID \n CREATED \n VIRTUAL SIZE \n This host has no images. You may have one or more dangling images. A\ndangling image is not used by a running container and is not an ancestor of\nanother image on your system. A fast way to remove dangling images is\nthe following: $ docker rmi -f $(docker images -q -a -f dangling=true) This command uses docker images to list all images ( -a flag) by numeric\nIDs ( -q flag) and filters them to find dangling images ( -f\ndangling=true ). Then, the docker rmi command forcibly ( -f flag) removes\nthe resulting list. To remove just one image, use the docker rmi ID \ncommand.",
"title": "Clean your host of Docker artifacts"
},
{
"loc": "/project/set-up-dev-env#build-an-image",
"tags": "",
"text": "If you followed the last procedure, your host is clean of unnecessary images \nand containers. In this section, you build an image from the Docker development\nenvironment. Open a terminal. Mac users, use boot2docker status to make sure Boot2Docker is running. You\nmay need to run eval \"$(boot2docker shellinit)\" to initialize your shell\nenvironment. Change into the root of your forked repository. $ cd ~/repos/docker-fork If you are following along with this guide, you created a dry-run-test \nbranch when you set up Git for\ncontributing Ensure you are on your dry-run-test branch. $ git checkout dry-run-test If you get a message that the branch doesn't exist, add the -b flag so the\ncommand both creates the branch and checks it out. Compile your development environment container into an image. $ docker build -t dry-run-test . The docker build command returns informational messages as it runs. The\nfirst build may take a few minutes to create an image. Using the\ninstructions in the Dockerfile , the build may need to download source and\nother images. A successful build returns a final status message similar to\nthe following: Successfully built 676815d59283 List your Docker images again. $ docker images You should see something similar to this: \n \n REPOSITORY \n TAG \n IMAGE ID \n CREATED \n VIRTUAL SIZE \n \n \n dry-run-test \n latest \n 663fbee70028 \n About a minute ago \n \n \n \n ubuntu \n trusty \n 2d24f826cb16 \n 2 days ago \n 188.3 MB \n \n \n ubuntu \n trusty-20150218.1 \n 2d24f826cb16 \n 2 days ago \n 188.3 MB \n \n \n ubuntu \n 14.04 \n 2d24f826cb16 \n 2 days ago \n 188.3 MB \n \n \n ubuntu \n 14.04.2 \n 2d24f826cb16 \n 2 days ago \n 188.3 MB \n \n \n ubuntu \n latest \n 2d24f826cb16 \n 2 days ago \n 188.3 MB \n Locate your new dry-run-test image in the list. You should also see a\nnumber of ubuntu images. The build process creates these. They are the\nancestors of your new Docker development image. 
When you next rebuild your\nimage, the build process reuses these ancestor images if they exist. Keeping the ancestor images improves the build performance. When you rebuild\nthe child image, the build process uses the local ancestors rather than\nretrieving them from the Hub. The build process gets new ancestors only if\nDocker Hub has updated versions.",
"title": "Build an image"
},
{
"loc": "/project/set-up-dev-env#start-a-container-and-run-a-test",
"tags": "",
"text": "At this point, you have created a new Docker development environment image. Now,\nyou'll use this image to create a Docker container to develop in. Then, you'll\nbuild and run a docker binary in your container. Open two additional terminals on your host. At this point, you'll have about three terminals open. Mac OS X users, make sure you run eval \"$(boot2docker shellinit)\" in any new \nterminals. In a terminal, create a new container from your dry-run-test image. $ docker run --privileged --rm -ti dry-run-test /bin/bash\nroot@5f8630b873fe:/go/src/github.com/docker/docker# The command creates a container from your dry-run-test image. It opens an\ninteractive terminal ( -ti ) running a /bin/bash shell . The --privileged flag gives the container access to kernel features and device\naccess. It is this flag that allows you to run a container in a container.\nFinally, the --rm flag instructs Docker to remove the container when you\nexit the /bin/bash shell. The container includes the source of your image repository in the /go/src/github.com/docker/docker directory. Try listing the contents to\nverify they are the same as that of your docker-fork repo. Investigate your container a bit. If you do a go version you'll find the go language is part of the\ncontainer. root@31ed86e9ddcf:/go/src/github.com/docker/docker# go version\ngo version go1.4.2 linux/amd64 Similarly, if you do a docker version you find the container\nhas no docker binary. root@31ed86e9ddcf:/go/src/github.com/docker/docker# docker version\nbash: docker: command not found You will create one in the next steps. From the /go/src/github.com/docker/docker directory make a docker binary\nwith the make.sh script. root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh binary You only call hack/make.sh to build a binary inside a Docker\ndevelopment container as you are now. On your host, you'll use make \ncommands (more about this later). 
As it makes the binary, the make.sh script reports the build's progress.\nWhen the command completes successfully, you should see the following\noutput: --- Making bundle: ubuntu (in bundles/1.5.0-dev/ubuntu)\nCreated package {:path= \"lxc-docker-1.5.0-dev_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb\"}\nCreated package {:path= \"lxc-docker_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb\"} List all the contents of the binary directory. root@5f8630b873fe:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary/\ndocker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256 You should see that the binary directory, just as it sounds, contains the\nnewly built binaries. Copy the docker binary to the /usr/bin directory of your container. root@5f8630b873fe:/go/src/github.com/docker/docker# cp bundles/1.5.0-dev/binary/docker /usr/bin Inside your container, check your Docker version. root@5f8630b873fe:/go/src/github.com/docker/docker# docker --version\nDocker version 1.5.0-dev, build 6e728fb Inside the container you are running a development version. This is the version\non the current branch; it reflects the value of the VERSION file at the\nroot of your docker-fork repository. Start a docker daemon running inside your container. root@5f8630b873fe:/go/src/github.com/docker/docker# docker -dD The -dD flag starts the daemon in debug mode; you'll find this useful\nwhen debugging your code. Bring up one of the terminals on your local host. List your containers and look for the container running the dry-run-test image. $ docker ps \n \n CONTAINER ID \n IMAGE \n COMMAND \n CREATED \n STATUS \n PORTS \n NAMES \n \n \n 474f07652525 \n dry-run-test:latest \n \"hack/dind /bin/bash \n 14 minutes ago \n Up 14 minutes \n \n tender_shockley \n In this example, the container's name is tender_shockley ; yours will be\ndifferent. From the terminal, start another shell on your Docker development container. 
$ docker exec -it tender_shockley bash At this point, you have two terminals both with a shell open into your\ndevelopment container. One terminal is running a debug session. The other\nterminal is displaying a bash prompt. At the prompt, test the Docker client by running the hello-world container. root@9337c96e017a:/go/src/github.com/docker/docker# docker run hello-world You should see the image load and return. Meanwhile, you\ncan see the calls made via the debug session in your other terminal.",
"title": "Start a container and run a test"
},
{
"loc": "/project/set-up-dev-env#restart-a-container-with-your-source",
"tags": "",
"text": "At this point, you have experienced the \"Docker inception\" technique. That is,\nyou have: built a Docker image from the Docker repository created and started a Docker development container from that image built a Docker binary inside of your Docker development container launched a docker daemon using your newly compiled binary called the docker client to run a hello-world container inside\n your development container When you really get to developing code though, you'll want to iterate code\nchanges and builds inside the container. For that you need to mount your local\nDocker repository source into your Docker container. Try that now. If you haven't already, exit out of BASH shells in your running Docker\ncontainer. If you have followed this guide exactly, exiting out of your BASH shells stops\nthe running container. You can use the docker ps command to verify the\ndevelopment container is stopped. All of your terminals should be at the\nlocal host prompt. Choose a terminal and make sure you are in your docker-fork repository. $ pwd\n/Users/mary/go/src/github.com/moxiegirl/docker-fork Your location will be different because it reflects your environment. Create a container using dry-run-test but this time mount your repository\nonto the /go directory inside the container. $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash When you pass pwd , docker resolves it to your current directory. From inside the container, list your binary directory. root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary\nls: cannot access binary: No such file or directory Your dry-run-test image does not retain any of the changes you made inside\nthe container. This is the expected behavior for a container. In a fresh terminal on your local host, change to the docker-fork root. $ cd ~/repos/docker-fork/ Create a fresh binary but this time use the make command. $ make BINDDIR=. 
binary The BINDDIR flag is only necessary on Mac OS X but it won't hurt to pass\nit on the Linux command line. The make command, like the make.sh script\ninside the container, reports its progress. When the make succeeds, it\nreturns the location of the new binary. Back in the terminal running the container, list your binary directory. root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary\ndocker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256 The compiled binaries created from your repository on your local host are\nnow available inside your running Docker development container. Repeat the steps you ran in the previous procedure. copy the binary inside the development container using\n cp bundles/1.5.0-dev/binary/docker /usr/bin start docker -dD to launch the Docker daemon inside the container run docker ps on local host to get the development container's name connect to your running container docker exec -it container_name bash use the docker run hello-world command to create and run a container \n inside your development container",
"title": "Restart a container with your source"
},
{
"loc": "/project/set-up-dev-env#where-to-go-next",
"tags": "",
"text": "Congratulations, you have successfully achieved Docker inception. At this point,\nyou've set up your development environment and verified almost all the essential\nprocesses you need to contribute. Of course, before you start contributing, you'll need to learn one more piece of the development environment, the test\nframework .",
"title": "Where to go next"
},
{
"loc": "/project/test-and-docs/",
"tags": "",
"text": "Run tests and test documentation\nContributing includes testing your changes. If you change the Docker code, you\nmay need to add a new test or modify an existing one. Your contribution could\neven be adding tests to Docker. For this reason, you need to know a little\nabout Docker's test infrastructure.\nMany contributors contribute documentation only. Or, a contributor makes a code\ncontribution that changes how Docker behaves and that change needs\ndocumentation. For these reasons, you also need to know how to build, view, and\ntest the Docker documentation.\nIn this section, you run tests in the dry-run-test branch of your Docker\nfork. If you have followed along in this guide, you already have this branch.\nIf you don't have this branch, you can create it or simply use another of your\nbranches.\nUnderstand testing at Docker\nDocker tests use the Go language's test framework. In this framework, files\nwhose names end in _test.go contain test code; you'll find test files like\nthis throughout the Docker repo. Use these files for inspiration when writing\nyour own tests. For information on Go's test framework, see Go's testing package\ndocumentation and the go test help. \nYou are responsible for unit testing your contribution when you add new or\nchange existing Docker code. A unit test is a piece of code that invokes a\nsingle, small piece of code ( unit of work ) to verify the unit works as\nexpected.\nDepending on your contribution, you may need to add integration tests. These\nare tests that combine two or more work units into one component. These work\nunits each have unit tests and then, together, integration tests that test the\ninterface between the components. The integration and integration-cli\ndirectories in the Docker repository contain integration test code.\nTesting is its own speciality. If you aren't familiar with testing techniques,\nthere is a lot of information available to you on the Web. 
For now, you should\nunderstand that the Docker maintainers may ask you to write a new test or\nchange an existing one.\nRun tests on your local host\nBefore submitting any code change, you should run the entire Docker test suite.\nThe Makefile contains a target for the entire test suite. The target's name\nis simply test. The Makefile contains several targets for testing:\n\n\n \n Target\n What this target does\n \n \n test\n Run all the tests.\n \n \n test-unit\n Run just the unit tests.\n \n \n test-integration\n Run just integration tests.\n \n \n test-integration-cli\n Run the test for the integration command line interface.\n \n \n test-docker-py\n Run the tests for Docker API client.\n \n \n docs-test\n Runs the documentation test build.\n \n\n\nRun the entire test suite on your current repository:\n\n\nOpen a terminal on your local host.\n\n\nChange to the root of your Docker repository.\n$ cd docker-fork\n\n\n\nMake sure you are in your development branch.\n$ git checkout dry-run-test\n\n\n\nRun the make test command.\n$ make test\n\nThis command does several things: it creates a container temporarily for\ntesting. Inside that container, the make:\n\ncreates a new binary\ncross-compiles all the binaries for the various operating systems\nruns all the tests in the system\n\nIt can take several minutes to run all the tests. 
When they complete\nsuccessfully, you see that the output concludes with something like this:\n[PASSED]: top - sleep process should be listed in privileged mode\n[PASSED]: version - verify that it works and that the output is properly formatted\nPASS\ncoverage: 70.8% of statements\n--- Making bundle: test-docker-py (in bundles/1.5.0-dev/test-docker-py)\n+++ exec docker --daemon --debug --host unix:///go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.sock --storage-driver vfs --exec-driver native --pidfile /go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.pid\n.................................................................\n----------------------------------------------------------------------\nRan 65 tests in 89.266s\n\n\n\nRun test targets inside the development container\nIf you are working inside a Docker development container, you use the\nhack/make.sh script to run tests. The hack/make.sh script doesn't\nhave a single target that runs all the tests. Instead, you provide a single\ncommand line with multiple targets that does the same thing.\nTry this now.\n\n\nOpen a terminal and change to the docker-fork root.\n\n\nStart a Docker development image.\nIf you are following along with this guide, you should have a\ndry-run-test image.\n$ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash\n\n\n\nRun the tests using the hack/make.sh script.\nroot@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit test-integration test-integration-cli test-docker-py\n\nThe tests run just as they did within your local host.\n\n\nOf course, you can also run a subset of these targets too. 
For example, to run\njust the unit tests:\nroot@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit\n\nMost test targets require that you build these precursor targets first:\ndynbinary binary cross\nRunning individual or multiple named tests\nYou can use the TESTFLAGS environment variable to run a single test. The\nflag's value is passed as arguments to the go test command. For example, from\nyour local host you can run the TestBuild test with this command:\n $ TESTFLAGS='-test.run \\ˆTestBuild\\$' make test\n\nTo run the same test inside your Docker development container, you do this:\nroot@5f8630b873fe:/go/src/github.com/docker/docker# TESTFLAGS='-run ˆTestBuild$' hack/make.sh\n\nIf tests under Boot2Docker fail due to disk space errors\nRunning the tests requires about 2GB of memory. If you are running your\ncontainer on bare metal, that is you are not running with Boot2Docker, your\nDocker development container is able to take the memory it requires directly\nfrom your local host.\nIf you are running Docker using Boot2Docker, the VM uses 2048MB by default.\nThis means you can exceed the memory of your VM running tests in a Boot2Docker\nenvironment. 
When the test suite runs out of memory, it returns errors similar\nto the following:\nserver.go:1302 Error: Insertion failed because database is full: database or\ndisk is full\n\nutils_test.go:179: Error copy: exit status 1 (cp: writing\n'/tmp/docker-testd5c9-[...]': No space left on device\n\nTo increase the memory on your VM, you need to reinitialize the Boot2Docker VM\nwith new memory settings.\n\n\nStop all running containers.\n\n\nView the current memory setting.\n$ boot2docker info\n{\n \"Name\": \"boot2docker-vm\",\n \"UUID\": \"491736fd-4075-4be7-a6f5-1d4cdcf2cc74\",\n \"Iso\": \"/Users/mary/.boot2docker/boot2docker.iso\",\n \"State\": \"running\",\n \"CPUs\": 8,\n \"Memory\": 2048,\n \"VRAM\": 8,\n \"CfgFile\": \"/Users/mary/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vbox\",\n \"BaseFolder\": \"/Users/mary/VirtualBox VMs/boot2docker-vm\",\n \"OSType\": \"\",\n \"Flag\": 0,\n \"BootOrder\": null,\n \"DockerPort\": 0,\n \"SSHPort\": 2022,\n \"SerialFile\": \"/Users/mary/.boot2docker/boot2docker-vm.sock\"\n}\n\n\n\nDelete your existing boot2docker profile.\n$ boot2docker delete\n\n\n\nReinitialize boot2docker and specify a higher memory.\n$ boot2docker init -m 5555\n\n\n\nVerify the memory was reset.\n$ boot2docker info\n\n\n\nRestart your container and try your test again.\n\n\nBuild and test the documentation\nThe Docker documentation source files are under docs/sources. The content is\nwritten using extended Markdown. We use the static generator MkDocs to build Docker's\ndocumentation. Of course, you don't need to install this generator\nto build the documentation; it is included with the container.\nYou should always check your documentation for grammar and spelling. The best\nway to do this is with an online grammar checker.\nWhen you change a documentation source file, you should test your change\nlocally to make sure your content is there and any links work correctly. You\ncan build the documentation from the local host. 
The build starts a container\nand loads the documentation into a server. As long as this container runs, you\ncan browse the docs.\n\n\nIn a terminal, change to the root of your docker-fork repository.\n$ cd ~/repos/dry-run-test\n\n\n\nMake sure you are in your feature branch.\n$ git status\nOn branch dry-run-test\nYour branch is up-to-date with 'origin/dry-run-test'.\nnothing to commit, working directory clean\n\n\n\nBuild the documentation.\n$ make docs\n\nWhen the build completes, you'll see a final output message similar to the\nfollowing:\nSuccessfully built ee7fe7553123\ndocker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 \"docker-docs:dry-run-test\" mkdocs serve\nRunning at: http://0.0.0.0:8000/\nLive reload enabled.\nHold ctrl+c to quit.\n\n\n\nEnter the URL in your browser.\nIf you are running Boot2Docker, replace the default localhost address\n(0.0.0.0) with your DOCKERHOST value. You can get this value at any time by\nentering boot2docker ip at the command line.\n\n\nOnce in the documentation, look for the red notice to verify you are seeing the correct build.\n\n\n\nNavigate to your new or changed document.\n\n\nReview both the content and the links.\n\n\nReturn to your terminal and exit out of the running documentation container.\n\n\nWhere to go next\nCongratulations, you have successfully completed the basics you need to\nunderstand the Docker test framework. In the next steps, you use what you have\nlearned so far to contribute to Docker by working on an\nissue.",
"title": "Run tests and test documentation"
},
{
"loc": "/project/test-and-docs#run-tests-and-test-documentation",
"tags": "",
"text": "Contributing includes testing your changes. If you change the Docker code, you\nmay need to add a new test or modify an existing one. Your contribution could\neven be adding tests to Docker. For this reason, you need to know a little\nabout Docker's test infrastructure. Many contributors contribute documentation only. Or, a contributor makes a code\ncontribution that changes how Docker behaves and that change needs\ndocumentation. For these reasons, you also need to know how to build, view, and\ntest the Docker documentation. In this section, you run tests in the dry-run-test branch of your Docker\nfork. If you have followed along in this guide, you already have this branch.\nIf you don't have this branch, you can create it or simply use another of your\nbranches.",
"title": "Run tests and test documentation"
},
{
"loc": "/project/test-and-docs#understand-testing-at-docker",
"tags": "",
"text": "Docker tests use the Go language's test framework. In this framework, files\nwhose names end in _test.go contain test code; you'll find test files like\nthis throughout the Docker repo. Use these files for inspiration when writing\nyour own tests. For information on Go's test framework, see Go's testing package\ndocumentation and the go test help . You are responsible for unit testing your contribution when you add new or\nchange existing Docker code. A unit test is a piece of code that invokes a\nsingle, small piece of code ( unit of work ) to verify the unit works as\nexpected. Depending on your contribution, you may need to add integration tests . These\nare tests that combine two or more work units into one component. These work\nunits each have unit tests and then, together, integration tests that test the\ninterface between the components. The integration and integration-cli \ndirectories in the Docker repository contain integration test code. Testing is its own speciality. If you aren't familiar with testing techniques,\nthere is a lot of information available to you on the Web. For now, you should\nunderstand that the Docker maintainers may ask you to write a new test or\nchange an existing one. Run tests on your local host Before submitting any code change, you should run the entire Docker test suite.\nThe Makefile contains a target for the entire test suite. The target's name\nis simply test . The Makefile contains several targets for testing: \n \n Target \n What this target does \n \n \n test \n Run all the tests. \n \n \n test-unit \n Run just the unit tests. \n \n \n test-integration \n Run just integration tests. \n \n \n test-integration-cli \n Run the test for the integration command line interface. \n \n \n test-docker-py \n Run the tests for Docker API client. \n \n \n docs-test \n Runs the documentation test build. 
\n Run the entire test suite on your current repository: Open a terminal on your local host. Change to the root of your Docker repository. $ cd docker-fork Make sure you are in your development branch. $ git checkout dry-run-test Run the make test command. $ make test This command does several things: it creates a container temporarily for\ntesting. Inside that container, the make : creates a new binary cross-compiles all the binaries for the various operating systems runs all the tests in the system It can take several minutes to run all the tests. When they complete\nsuccessfully, you see that the output concludes with something like this: [PASSED]: top - sleep process should be listed in privileged mode\n[PASSED]: version - verify that it works and that the output is properly formatted\nPASS\ncoverage: 70.8% of statements\n--- Making bundle: test-docker-py (in bundles/1.5.0-dev/test-docker-py)\n+++ exec docker --daemon --debug --host unix:///go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.sock --storage-driver vfs --exec-driver native --pidfile /go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.pid\n.................................................................\n----------------------------------------------------------------------\nRan 65 tests in 89.266s Run test targets inside the development container If you are working inside a Docker development container, you use the hack/make.sh script to run tests. The hack/make.sh script doesn't\nhave a single target that runs all the tests. Instead, you provide a single\ncommand line with multiple targets that does the same thing. Try this now. Open a terminal and change to the docker-fork root. Start a Docker development image. If you are following along with this guide, you should have a dry-run-test image. $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash Run the tests using the hack/make.sh script. 
root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit test-integration test-integration-cli test-docker-py The tests run just as they did within your local host. Of course, you can also run a subset of these targets too. For example, to run\njust the unit tests: root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit Most test targets require that you build these precursor targets first: dynbinary binary cross",
"title": "Understand testing at Docker"
},
{
"loc": "/project/test-and-docs#running-individual-or-multiple-named-tests",
"tags": "",
"text": "You can use the TESTFLAGS environment variable to run a single test. The\nflag's value is passed as arguments to the go test command. For example, from\nyour local host you can run the TestBuild test with this command: $ TESTFLAGS='-test.run \\ˆTestBuild\\$' make test To run the same test inside your Docker development container, you do this: root@5f8630b873fe:/go/src/github.com/docker/docker# TESTFLAGS='-run ˆTestBuild$' hack/make.sh",
"title": "Running individual or multiple named tests"
},
{
"loc": "/project/test-and-docs#if-tests-under-boot2docker-fail-due-to-disk-space-errors",
"tags": "",
"text": "Running the tests requires about 2GB of memory. If you are running your\ncontainer on bare metal, that is you are not running with Boot2Docker, your\nDocker development container is able to take the memory it requires directly\nfrom your local host. If you are running Docker using Boot2Docker, the VM uses 2048MB by default.\nThis means you can exceed the memory of your VM running tests in a Boot2Docker\nenvironment. When the test suite runs out of memory, it returns errors similar\nto the following: server.go:1302 Error: Insertion failed because database is full: database or\ndisk is full\n\nutils_test.go:179: Error copy: exit status 1 (cp: writing\n'/tmp/docker-testd5c9-[...]': No space left on device To increase the memory on your VM, you need to reinitialize the Boot2Docker VM\nwith new memory settings. Stop all running containers. View the current memory setting. $ boot2docker info\n{\n \"Name\": \"boot2docker-vm\",\n \"UUID\": \"491736fd-4075-4be7-a6f5-1d4cdcf2cc74\",\n \"Iso\": \"/Users/mary/.boot2docker/boot2docker.iso\",\n \"State\": \"running\",\n \"CPUs\": 8,\n \"Memory\": 2048,\n \"VRAM\": 8,\n \"CfgFile\": \"/Users/mary/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vbox\",\n \"BaseFolder\": \"/Users/mary/VirtualBox VMs/boot2docker-vm\",\n \"OSType\": \"\",\n \"Flag\": 0,\n \"BootOrder\": null,\n \"DockerPort\": 0,\n \"SSHPort\": 2022,\n \"SerialFile\": \"/Users/mary/.boot2docker/boot2docker-vm.sock\"\n} Delete your existing boot2docker profile. $ boot2docker delete Reinitialize boot2docker and specify a higher memory. $ boot2docker init -m 5555 Verify the memory was reset. $ boot2docker info Restart your container and try your test again.",
"title": "If tests under Boot2Docker fail due to disk space errors"
},
{
"loc": "/project/test-and-docs#build-and-test-the-documentation",
"tags": "",
"text": "The Docker documentation source files are under docs/sources . The content is\nwritten using extended Markdown. We use the static generator MkDocs to build Docker's\ndocumentation. Of course, you don't need to install this generator\nto build the documentation; it is included with the container. You should always check your documentation for grammar and spelling. The best\nway to do this is with an online grammar checker . When you change a documentation source file, you should test your change\nlocally to make sure your content is there and any links work correctly. You\ncan build the documentation from the local host. The build starts a container\nand loads the documentation into a server. As long as this container runs, you\ncan browse the docs. In a terminal, change to the root of your docker-fork repository. $ cd ~/repos/dry-run-test Make sure you are in your feature branch. $ git status\nOn branch dry-run-test\nYour branch is up-to-date with 'origin/dry-run-test'.\nnothing to commit, working directory clean Build the documentation. $ make docs When the build completes, you'll see a final output message similar to the\nfollowing: Successfully built ee7fe7553123\ndocker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 \"docker-docs:dry-run-test\" mkdocs serve\nRunning at: http://0.0.0.0:8000/\nLive reload enabled.\nHold ctrl+c to quit. Enter the URL in your browser. If you are running Boot2Docker, replace the default localhost address\n(0.0.0.0) with your DOCKERHOST value. You can get this value at any time by\nentering boot2docker ip at the command line. Once in the documentation, look for the red notice to verify you are seeing the correct build. Navigate to your new or changed document. Review both the content and the links. Return to your terminal and exit out of the running documentation container.",
"title": "Build and test the documentation"
},
{
"loc": "/project/test-and-docs#where-to-go-next",
"tags": "",
"text": "Congratulations, you have successfully completed the basics you need to\nunderstand the Docker test framework. In the next steps, you use what you have\nlearned so far to contribute to Docker by working on an\nissue.",
"title": "Where to go next"
},
{
"loc": "/project/make-a-contribution/",
"tags": "",
"text": "Understand how to contribute\nContributing is a process where you work with Docker maintainers and the\ncommunity to improve Docker. The maintainers are experienced contributors\nwho specialize in one or more Docker components. Maintainers play a big role\nin reviewing contributions.\nThere is a formal process for contributing. We try to keep our contribution\nprocess simple so you'll want to contribute frequently.\nThe basic contribution workflow\nIn this guide, you work through Docker's basic contribution workflow by fixing a\nsingle beginner issue in the docker/docker repository. The workflow\nfor fixing simple issues looks like this:\n\nAll Docker repositories have code and documentation. You use this same workflow\nfor either content type. For example, you can find and fix doc or code issues.\nAlso, you can propose a new Docker feature or propose a new Docker tutorial. \nSome workflow stages do have slight differences for code or documentation\ncontributions. When you reach that point in the flow, we make sure to tell you.\nWhere to go next\nNow that you know a little about the contribution process, go to the next section\nto find an issue you want to work on.",
"title": "Understand contribution workflow"
},
{
"loc": "/project/make-a-contribution#understand-how-to-contribute",
"tags": "",
"text": "Contributing is a process where you work with Docker maintainers and the\ncommunity to improve Docker. The maintainers are experienced contributors\nwho specialize in one or more Docker components. Maintainers play a big role\nin reviewing contributions. There is a formal process for contributing. We try to keep our contribution\nprocess simple so you'll want to contribute frequently.",
"title": "Understand how to contribute"
},
{
"loc": "/project/make-a-contribution#the-basic-contribution-workflow",
"tags": "",
"text": "In this guide, you work through Docker's basic contribution workflow by fixing a\nsingle beginner issue in the docker/docker repository. The workflow\nfor fixing simple issues looks like this: All Docker repositories have code and documentation. You use this same workflow\nfor either content type. For example, you can find and fix doc or code issues.\nAlso, you can propose a new Docker feature or propose a new Docker tutorial. Some workflow stages do have slight differences for code or documentation\ncontributions. When you reach that point in the flow, we make sure to tell you.",
"title": "The basic contribution workflow"
},
{
"loc": "/project/make-a-contribution#where-to-go-next",
"tags": "",
"text": "Now that you know a little about the contribution process, go to the next section\nto find an issue you want to work on.",
"title": "Where to go next"
},
{
"loc": "/project/find-an-issue/",
"tags": "",
"text": "Find and claim an issue\nOn this page, you choose what you want to work on. As a contributor you can work\non whatever you want. If you are new to contributing, you should start by\nworking with our known issues.\nUnderstand the issue types\nAn existing issue is something reported by a Docker user. As issues come in,\nour maintainers triage them. Triage is its own topic. For now, it is important\nfor you to know that triage includes ranking issues according to difficulty. 
\nTriaged issues have one of these labels:\n\n \n Level\n Experience level guideline\n \n \n exp/beginner\n You have made less than 10 contributions in your lifetime to any open source project.\n \n \n exp/novice\n You have made more than 10 contributions to an open source project or at least 5 contributions to Docker. \n \n \n exp/proficient\n You have made more than 5 contributions to Docker which amount to at least 200 code lines or 1000 documentation lines. \n \n \n exp/expert\n You have made less than 20 commits to Docker which amount to 500-1000 code lines or 1000-3000 documentation lines. \n \n \n exp/master\n You have made more than 20 commits to Docker and greater than 1000 code lines or 3000 documentation lines.\n \n\n\nAs the table states, these labels are meant as guidelines. You might have\nwritten a whole plugin for Docker in a personal project and never contributed to\nDocker. With that kind of experience, you could take on an exp/expert or exp/master level task.\nClaim a beginner or novice issue\nIn this section, you find and claim an open documentation issue.\n\n\nGo to the docker/docker repository.\n\n\nClick on the \"Issues\" link.\nA list of the open issues appears. \n\n\n\nLook for the exp/beginner items on the list.\n\n\nClick on the \"labels\" dropdown and select exp/beginner.\nThe system filters to show only open exp/beginner issues.\n\n\nOpen an issue that interests you.\nThe comments on the issues can tell you both the problem and the potential \nsolution.\n\n\nMake sure that no other user has chosen to work on the issue.\nWe don't allow external contributors to assign issues to themselves. So, you\nneed to read the comments to find if a user claimed the issue by leaving a\n#dibs comment on the issue. \n\n\nWhen you find an open issue that both interests you and is unclaimed, add a\n#dibs comment.\n\nThis example uses issue 11038. Your issue # will be different depending on\n what you claimed. 
After a moment, Gordon, the Docker bot, changes the issue\n status to claimed.\n\n\nMake a note of the issue number; you'll need it later.\n\n\nSync your fork and create a new branch\nIf you have followed along in this guide, you forked the docker/docker\nrepository. Maybe that was an hour ago or a few days ago. In any case, before\nyou start working on your issue, sync your repository with the upstream\ndocker/docker master. Syncing ensures your repository has the latest\nchanges.\nTo sync your repository:\n\n\nOpen a terminal on your local host.\n\n\nChange directory to the docker-fork root.\n$ cd ~/repos/docker-fork\n\n\n\nCheckout the master branch.\n$ git checkout master\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nRecall that origin/master is a branch on your remote GitHub repository.\n\n\nMake sure you have the upstream remote docker/docker by listing them.\n$ git remote -v\norigin https://github.com/moxiegirl/docker.git (fetch)\norigin https://github.com/moxiegirl/docker.git (push)\nupstream https://github.com/docker/docker.git (fetch)\nupstream https://github.com/docker/docker.git (push)\n\nIf the upstream is missing, add it.\n$ git remote add upstream https://github.com/docker/docker.git\n\n\n\nFetch all the changes from the upstream/master branch.\n$ git fetch upstream\nremote: Counting objects: 141, done.\nremote: Compressing objects: 100% (29/29), done.\nremote: Total 141 (delta 52), reused 46 (delta 46), pack-reused 66\nReceiving objects: 100% (141/141), 112.43 KiB | 0 bytes/s, done.\nResolving deltas: 100% (79/79), done.\nFrom github.com:docker/docker\n 9ffdf1e..01d09e4 docs - upstream/docs\n 05ba127..ac2521b master - upstream/master\n\nThis command says get all the changes from the master branch belonging to\nthe upstream remote.\n\n\nRebase your local master with the upstream/master.\n$ git rebase upstream/master\nFirst, rewinding head to replay your work on top of it...\nFast-forwarded master to 
upstream/master.\n\nThis command writes all the commits from the upstream branch into your local\nbranch.\n\n\nCheck the status of your local branch.\n$ git status\nOn branch master\nYour branch is ahead of 'origin/master' by 38 commits.\n (use \"git push\" to publish your local commits)\nnothing to commit, working directory clean\n\nYour local repository now has any changes from the upstream remote. You\nneed to push the changes to your own remote fork which is origin/master.\n\n\nPush the rebased master to origin/master.\n$ git push origin\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com': \nCounting objects: 223, done.\nCompressing objects: 100% (38/38), done.\nWriting objects: 100% (69/69), 8.76 KiB | 0 bytes/s, done.\nTotal 69 (delta 53), reused 47 (delta 31)\nTo https://github.com/moxiegirl/docker.git\n 8e107a9..5035fa1 master - master\n\n\n\nCreate a new feature branch to work on your issue.\nYour branch name should have the format XXXX-descriptive where XXXX is\nthe issue number you are working on. For example:\n$ git checkout -b 11038-fix-rhel-link\nSwitched to a new branch '11038-fix-rhel-link'\n\nYour branch should be up-to-date with the upstream/master. Why? Because you\nbranched off a freshly synced master. Let's check this anyway in the next\nstep.\n\n\nRebase your branch from upstream/master.\n$ git rebase upstream/master\nCurrent branch 11038-fix-rhel-link is up to date.\n\nAt this point, your local branch, your remote repository, and the Docker\nrepository all have identical code. You are ready to make changes for your\nissue.\n\n\nWhere to go next\nAt this point, you know what you want to work on and you have a branch to do\nyour work in. Go on to the next section to learn how to work on your\nchanges.",
"title": "Find an issue"
},
{
"loc": "/project/find-an-issue#find-and-claim-an-issue",
"tags": "",
"text": "On this page, you choose what you want to work on. As a contributor you can work\non whatever you want. If you are new to contributing, you should start by\nworking with our known issues.",
"title": "Find and claim an issue"
},
{
"loc": "/project/find-an-issue#understand-the-issue-types",
"tags": "",
"text": "An existing issue is something reported by a Docker user. As issues come in,\nour maintainers triage them. Triage is its own topic. For now, it is important\nfor you to know that triage includes ranking issues according to difficulty. Triaged issues have one of these labels: \n \n Level \n Experience level guideline \n \n \n exp/beginner \n You have made less than 10 contributions in your lifetime to any open source project. \n \n \n exp/novice \n You have made more than 10 contributions to an open source project or at least 5 contributions to Docker. \n \n \n exp/proficient \n You have made more than 5 contributions to Docker which amount to at least 200 code lines or 1000 documentation lines. \n \n \n exp/expert \n You have made less than 20 commits to Docker which amount to 500-1000 code lines or 1000-3000 documentation lines. \n \n \n exp/master \n You have made more than 20 commits to Docker and greater than 1000 code lines or 3000 documentation lines. \n As the table states, these labels are meant as guidelines. You might have\nwritten a whole plugin for Docker in a personal project and never contributed to\nDocker. With that kind of experience, you could take on an exp/expert or exp/master level task.",
"title": "Understand the issue types"
},
{
"loc": "/project/find-an-issue#claim-a-beginner-or-novice-issue",
"tags": "",
"text": "In this section, you find and claim an open documentation issue. Go to the docker/docker repository . Click on the \"Issues\" link. A list of the open issues appears. Look for the exp/beginner items on the list. Click on the \"labels\" dropdown and select exp/beginner . The system filters to show only open exp/beginner issues. Open an issue that interests you. The comments on the issues can tell you both the problem and the potential \nsolution. Make sure that no other user has chosen to work on the issue. We don't allow external contributors to assign issues to themselves. So, you\nneed to read the comments to find if a user claimed the issue by leaving a #dibs comment on the issue. When you find an open issue that both interests you and is unclaimed, add a #dibs comment. This example uses issue 11038. Your issue # will be different depending on\n what you claimed. After a moment, Gordon, the Docker bot, changes the issue\n status to claimed. Make a note of the issue number; you'll need it later.",
"title": "Claim a beginner or novice issue"
},
{
"loc": "/project/find-an-issue#sync-your-fork-and-create-a-new-branch",
"tags": "",
"text": "If you have followed along in this guide, you forked the docker/docker \nrepository. Maybe that was an hour ago or a few days ago. In any case, before\nyou start working on your issue, sync your repository with the upstream docker/docker master. Syncing ensures your repository has the latest\nchanges. To sync your repository: Open a terminal on your local host. Change directory to the docker-fork root. $ cd ~/repos/docker-fork Checkout the master branch. $ git checkout master\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'. Recall that origin/master is a branch on your remote GitHub repository. Make sure you have the upstream remote docker/docker by listing them. $ git remote -v\norigin https://github.com/moxiegirl/docker.git (fetch)\norigin https://github.com/moxiegirl/docker.git (push)\nupstream https://github.com/docker/docker.git (fetch)\nupstream https://github.com/docker/docker.git (push) If the upstream is missing, add it. $ git remote add upstream https://github.com/docker/docker.git Fetch all the changes from the upstream/master branch. $ git fetch upstream\nremote: Counting objects: 141, done.\nremote: Compressing objects: 100% (29/29), done.\nremote: Total 141 (delta 52), reused 46 (delta 46), pack-reused 66\nReceiving objects: 100% (141/141), 112.43 KiB | 0 bytes/s, done.\nResolving deltas: 100% (79/79), done.\nFrom github.com:docker/docker\n 9ffdf1e..01d09e4 docs - upstream/docs\n 05ba127..ac2521b master - upstream/master This command says get all the changes from the master branch belonging to\nthe upstream remote. Rebase your local master with the upstream/master . $ git rebase upstream/master\nFirst, rewinding head to replay your work on top of it...\nFast-forwarded master to upstream/master. This command writes all the commits from the upstream branch into your local\nbranch. Check the status of your local branch. 
$ git status\nOn branch master\nYour branch is ahead of 'origin/master' by 38 commits.\n (use \"git push\" to publish your local commits)\nnothing to commit, working directory clean Your local repository now has any changes from the upstream remote. You\nneed to push the changes to your own remote fork which is origin/master . Push the rebased master to origin/master . $ git push origin\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com': \nCounting objects: 223, done.\nCompressing objects: 100% (38/38), done.\nWriting objects: 100% (69/69), 8.76 KiB | 0 bytes/s, done.\nTotal 69 (delta 53), reused 47 (delta 31)\nTo https://github.com/moxiegirl/docker.git\n 8e107a9..5035fa1 master - master Create a new feature branch to work on your issue. Your branch name should have the format XXXX-descriptive where XXXX is\nthe issue number you are working on. For example: $ git checkout -b 11038-fix-rhel-link\nSwitched to a new branch '11038-fix-rhel-link' Your branch should be up-to-date with the upstream/master. Why? Because you\nbranched off a freshly synced master. Let's check this anyway in the next\nstep. Rebase your branch from upstream/master. $ git rebase upstream/master\nCurrent branch 11038-fix-rhel-link is up to date. At this point, your local branch, your remote repository, and the Docker\nrepository all have identical code. You are ready to make changes for your\nissue.",
"title": "Sync your fork and create a new branch"
},
{
"loc": "/project/find-an-issue#where-to-go-next",
"tags": "",
"text": "At this point, you know what you want to work on and you have a branch to do\nyour work in. Go on to the next section to learn how to work on your\nchanges.",
"title": "Where to go next"
},
{
"loc": "/project/work-issue/",
"tags": "",
"text": "Work on your issue\nThe work you do for your issue depends on the specific issue you picked.\nThis section gives you a step-by-step workflow. Where appropriate, it provides\ncommand examples. \nHowever, this is a generalized workflow; depending on your issue, you may repeat\nsteps or even skip some. How much time the work takes depends on you --- you\ncould spend days or 30 minutes of your time.\nHow to work on your local branch\nFollow this workflow as you work:\n\n\nReview the appropriate style guide.\nIf you are changing code, review the coding style guide. Changing documentation? Review the\ndocumentation style guide. \n\n\nMake changes in your feature branch.\nYou created your feature branch in the last section. Here you use the\ndevelopment container. If you are making a code change, you can mount your\nsource into a development container and iterate that way. For documentation\nalone, you can work on your local host. \nMake sure you don't change files in the vendor directory and its\nsubdirectories; they contain third-party dependency code. Review if you forgot the details of\nworking with a container.\n\n\nTest your changes as you work.\nIf you have followed along with the guide, you know the make test target\nruns the entire test suite and make docs builds the documentation. If you\nforgot the other test targets, see the documentation for testing both code and\ndocumentation. \n\n\nFor code changes, add unit tests if appropriate.\nIf you add new functionality or change existing functionality, you should\nadd a unit test also. Use the existing test files for inspiration. Aren't\nsure if you need tests? 
Skip this step; you can add them later in the\nprocess if necessary.\n\n\nFormat your source files correctly.\n\n \n \n File type\n How to format\n \n \n \n \n .go\n \n \n Format .go files using the gofmt command.\n For example, if you edited the docker.go file you would format the file\n like this:\n \n $ gofmt -s -w file.go\n \n Most file editors have a plugin to format for you. Check your editor's\n documentation.\n \n \n \n \n .md and non-.go files\n Wrap lines to 80 characters.\n \n \n\n\n\nList your changes.\n$ git status\nOn branch 11038-fix-rhel-link\nChanges not staged for commit:\n (use \"git add file...\" to update what will be committed)\n (use \"git checkout -- file...\" to discard changes in working directory)\n\nmodified: docs/sources/installation/mac.md\nmodified: docs/sources/installation/rhel.md\n\nThe status command lists what changed in the repository. Make sure you see\nthe changes you expect.\n\n\nAdd your change to Git.\n$ git add docs/sources/installation/mac.md\n$ git add docs/sources/installation/rhel.md\n\n\n\nCommit your changes making sure you use the -s flag to sign your work.\n$ git commit -s -m \"Fixing RHEL link\"\n\n\n\nPush your change to your repository.\n$ git push origin\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com': \nCounting objects: 60, done.\nCompressing objects: 100% (7/7), done.\nWriting objects: 100% (7/7), 582 bytes | 0 bytes/s, done.\nTotal 7 (delta 6), reused 0 (delta 0)\nTo https://github.com/moxiegirl/docker.git\n * [new branch] 11038-fix-rhel-link - 11038-fix-rhel-link\nBranch 11038-fix-rhel-link set up to track remote branch 11038-fix-rhel-link from origin.\n\nThe first time you push a change, you must specify the branch. 
Later, you can just do this:\ngit push origin\n\n\n\nReview your branch on GitHub\nAfter you push a new branch, you should verify it on GitHub:\n\n\nOpen your browser to GitHub.\n\n\nGo to your Docker fork.\n\n\nSelect your branch from the dropdown.\n\n\n\nUse the \"Compare\" button to compare the differences between your branch and master.\nDepending on how long you've been working on your branch, your branch may be\n behind Docker's upstream repository. \n\n\nReview the commits.\nMake sure your branch only shows the work you've done.\n\n\nPull and rebase frequently\nYou should pull and rebase frequently as you work. \n\n\nReturn to the terminal on your local machine.\n\n\nMake sure you are in your branch.\n$ git branch 11038-fix-rhel-link\n\n\n\nFetch all the changes from the upstream/master branch.\n $ git fetch upstream master\n\nThis command says get all the changes from the master branch belonging to\nthe upstream remote.\n\n\nRebase your local master with Docker's upstream/master branch.\n $ git rebase -i upstream/master\n\nThis command starts an interactive rebase to merge code from Docker's\nupstream/master branch into your local branch. If you aren't familiar or\ncomfortable with rebase, you can learn more about rebasing on the web.\n\n\nRebase opens an editor with a list of commits.\n pick 1a79f55 Tweak some of the other text for grammar \n pick 53e4983 Fix a link \n pick 3ce07bb Add a new line about RHEL\n\nIf you run into trouble, git rebase --abort removes any changes and gets\nyou back to where you started. \n\n\nReplace the pick keyword with squash on all but the first commit.\n pick 1a79f55 Tweak some of the other text for grammar\n squash 53e4983 Fix a link\n squash 3ce07bb Add a new line about RHEL\n\nAfter closing the file, git opens your editor again to edit the commit\nmessage. 
\n\n\nEdit and save your commit message.\nMake sure you include your signature.\n\n\nPush any changes to your fork on GitHub.\n$ git push origin 11038-fix-rhel-link\n\n\n\nWhere to go next\nAt this point, you should understand how to work on an issue. In the next\nsection, you learn how to make a pull request.",
"title": "Work on an issue"
},
{
"loc": "/project/work-issue#work-on-your-issue",
"tags": "",
"text": "The work you do for your issue depends on the specific issue you picked.\nThis section gives you a step-by-step workflow. Where appropriate, it provides\ncommand examples. However, this is a generalized workflow; depending on your issue, you may repeat\nsteps or even skip some. How much time the work takes depends on you --- you\ncould spend days or 30 minutes of your time.",
"title": "Work on your issue"
},
{
"loc": "/project/work-issue#how-to-work-on-your-local-branch",
"tags": "",
"text": "Follow this workflow as you work: Review the appropriate style guide. If you are changing code, review the coding style guide . Changing documentation? Review the documentation style guide . Make changes in your feature branch. You created your feature branch in the last section. Here you use the\ndevelopment container. If you are making a code change, you can mount your\nsource into a development container and iterate that way. For documentation\nalone, you can work on your local host. Make sure you don't change files in the vendor directory and its\nsubdirectories; they contain third-party dependency code. Review if you forgot the details of\nworking with a container . Test your changes as you work. If you have followed along with the guide, you know the make test target\nruns the entire test suite and make docs builds the documentation. If you\nforgot the other test targets, see the documentation for testing both code and\ndocumentation . For code changes, add unit tests if appropriate. If you add new functionality or change existing functionality, you should\nadd a unit test also. Use the existing test files for inspiration. Aren't\nsure if you need tests? Skip this step; you can add them later in the\nprocess if necessary. Format your source files correctly. \n \n \n File type \n How to format \n \n \n \n \n .go \n \n \n Format .go files using the gofmt command.\n For example, if you edited the docker.go file you would format the file\n like this:\n \n $ gofmt -s -w file.go \n \n Most file editors have a plugin to format for you. Check your editor's\n documentation.\n \n \n \n \n .md and non- .go files \n Wrap lines to 80 characters. \n \n List your changes. 
$ git status\nOn branch 11038-fix-rhel-link\nChanges not staged for commit:\n (use \"git add file ...\" to update what will be committed)\n (use \"git checkout -- file ...\" to discard changes in working directory)\n\nmodified: docs/sources/installation/mac.md\nmodified: docs/sources/installation/rhel.md The status command lists what changed in the repository. Make sure you see\nthe changes you expect. Add your change to Git. $ git add docs/sources/installation/mac.md\n$ git add docs/sources/installation/rhel.md Commit your changes making sure you use the -s flag to sign your work. $ git commit -s -m \"Fixing RHEL link\" Push your change to your repository. $ git push origin\nUsername for 'https://github.com': moxiegirl\nPassword for 'https://moxiegirl@github.com': \nCounting objects: 60, done.\nCompressing objects: 100% (7/7), done.\nWriting objects: 100% (7/7), 582 bytes | 0 bytes/s, done.\nTotal 7 (delta 6), reused 0 (delta 0)\nTo https://github.com/moxiegirl/docker.git\n * [new branch] 11038-fix-rhel-link - 11038-fix-rhel-link\nBranch 11038-fix-rhel-link set up to track remote branch 11038-fix-rhel-link from origin. The first time you push a change, you must specify the branch. Later, you can just do this: git push origin",
"title": "How to work on your local branch"
},
{
"loc": "/project/work-issue#review-your-branch-on-github",
"tags": "",
"text": "After you push a new branch, you should verify it on GitHub: Open your browser to GitHub . Go to your Docker fork. Select your branch from the dropdown. Use the \"Compare\" button to compare the differences between your branch and master. Depending on how long you've been working on your branch, your branch may be\n behind Docker's upstream repository. Review the commits. Make sure your branch only shows the work you've done.",
"title": "Review your branch on GitHub"
},
{
"loc": "/project/work-issue#pull-and-rebase-frequently",
"tags": "",
"text": "You should pull and rebase frequently as you work. Return to the terminal on your local machine. Make sure you are in your branch. $ git branch 11038-fix-rhel-link Fetch all the changes from the upstream/master branch. $ git fetch upstream master This command says get all the changes from the master branch belonging to\nthe upstream remote. Rebase your local master with Docker's upstream/master branch. $ git rebase -i upstream/master This command starts an interactive rebase to merge code from Docker's upstream/master branch into your local branch. If you aren't familiar or\ncomfortable with rebase, you can learn more about rebasing on the web. Rebase opens an editor with a list of commits. pick 1a79f55 Tweak some of the other text for grammar \n pick 53e4983 Fix a link \n pick 3ce07bb Add a new line about RHEL If you run into trouble, git rebase --abort removes any changes and gets\nyou back to where you started. Replace the pick keyword with squash on all but the first commit. pick 1a79f55 Tweak some of the other text for grammar\n squash 53e4983 Fix a link\n squash 3ce07bb Add a new line about RHEL After closing the file, git opens your editor again to edit the commit\nmessage. Edit and save your commit message. Make sure you include your signature. Push any changes to your fork on GitHub. $ git push origin 11038-fix-rhel-link",
"title": "Pull and rebase frequently"
},
{
"loc": "/project/work-issue#where-to-go-next",
"tags": "",
"text": "At this point, you should understand how to work on an issue. In the next\nsection, you learn how to make a pull request.",
"title": "Where to go next"
},
{
"loc": "/project/create-pr/",
"tags": "",
"text": "Create a pull request (PR)\nA pull request (PR) sends your changes to the Docker maintainers for review. You\ncreate a pull request on GitHub. A pull request \"pulls\" changes from your forked\nrepository into the docker/docker repository.\nYou can see the\nlist of active pull requests to Docker on GitHub.\nCheck Your Work\nBefore you create a pull request, check your work.\n\n\nIn a terminal window, go to the root of your docker-fork repository. \n$ cd ~/repos/docker-fork\n\n\n\nCheckout your feature branch.\n$ git checkout 11038-fix-rhel-link\nAlready on '11038-fix-rhel-link'\n\n\n\nRun the full test suite on your branch.\n$ make test\n\nAll the tests should pass. If they don't, find out why and correct the\nsituation. \n\n\nOptionally, if you modified the documentation, build the documentation:\n$ make docs\n\n\n\nCommit and push any changes that result from your checks.\n\n\nRebase your branch\nAlways rebase and squash your commits before making a pull request. \n\n\nFetch any of the last minute changes from docker/docker.\n$ git fetch upstream master\nFrom github.com:docker/docker\n * branch master - FETCH_HEAD\n\n\n\nStart an interactive rebase.\n$ git rebase -i upstream/master\n\n\n\nRebase opens an editor with a list of commits.\npick 1a79f55 Tweak some of the other text for grammar\npick 53e4983 Fix a link\npick 3ce07bb Add a new line about RHEL\n\nIf you run into trouble, git rebase --abort removes any changes and gets\nyou back to where you started. \n\n\nReplace the pick keyword with squash on all but the first commit.\npick 1a79f55 Tweak some of the other text for grammar\nsquash 53e4983 Fix a link\nsquash 3ce07bb Add a new line about RHEL\n\nAfter closing the file, git opens your editor again to edit the commit\nmessage. 
\n\n\nEdit and save your commit message.\n`git commit -s`\n\nMake sure your message includes your signature.\n\n\n\nPush any changes to your fork on GitHub.\n$ git push origin 11038-fix-rhel-link\n\n\n\nCreate a PR on GitHub\nYou create and manage PRs on GitHub:\n\n\nOpen your browser to your fork on GitHub.\nYou should see the latest activity from your branch.\n\n\n\nClick \"Compare pull request.\"\nThe system displays the pull request dialog. \n\nThe pull request compares your changes to the master branch on the\ndocker/docker repository.\n\n\nEdit the dialog's description and add a reference to the issue you are fixing.\nGitHub helps you out by searching for the issue as you type.\n\n\n\nScroll down and verify the PR contains the commits and changes you expect.\nFor example, is the file count correct? Are the changes in the files what\nyou expect?\n\n\n\nPress \"Create pull request\".\nThe system creates the request and opens it for you in the docker/docker\nrepository.\n\n\n\nWhere to go next\nCongratulations, you've created your first pull request to Docker. The next\nstep is for you to learn how to participate in your PR's\nreview.",
"title": "Create a pull request"
},
{
"loc": "/project/create-pr#create-a-pull-request-pr",
"tags": "",
"text": "A pull request (PR) sends your changes to the Docker maintainers for review. You\ncreate a pull request on GitHub. A pull request \"pulls\" changes from your forked\nrepository into the docker/docker repository. You can see the\nlist of active pull requests to Docker on GitHub.",
"title": "Create a pull request (PR)"
},
{
"loc": "/project/create-pr#check-your-work",
"tags": "",
"text": "Before you create a pull request, check your work. In a terminal window, go to the root of your docker-fork repository. $ cd ~/repos/docker-fork Check out your feature branch. $ git checkout 11038-fix-rhel-link\nAlready on '11038-fix-rhel-link' Run the full test suite on your branch. $ make test All the tests should pass. If they don't, find out why and correct the\nsituation. Optionally, if you modified the documentation, build the documentation: $ make docs Commit and push any changes that result from your checks.",
"title": "Check Your Work"
},
{
"loc": "/project/create-pr#rebase-your-branch",
"tags": "",
"text": "Always rebase and squash your commits before making a pull request. Fetch any last-minute changes from docker/docker . $ git fetch upstream master\nFrom github.com:docker/docker\n * branch master - FETCH_HEAD Start an interactive rebase. $ git rebase -i upstream/master Rebase opens an editor with a list of commits. pick 1a79f55 Tweak some of the other text for grammar\npick 53e4983 Fix a link\npick 3ce07bb Add a new line about RHEL If you run into trouble, git rebase --abort removes any changes and gets\nyou back to where you started. Replace the pick keyword with squash on all but the first commit. pick 1a79f55 Tweak some of the other text for grammar\nsquash 53e4983 Fix a link\nsquash 3ce07bb Add a new line about RHEL After closing the file, git opens your editor again to edit the commit\nmessage. Edit and save your commit message. `git commit -s`\n\nMake sure your message includes your signature . Push any changes to your fork on GitHub. $ git push origin 11038-fix-rhel-link",
"title": "Rebase your branch"
},
{
"loc": "/project/create-pr#create-a-pr-on-github",
"tags": "",
"text": "You create and manage PRs on GitHub: Open your browser to your fork on GitHub. You should see the latest activity from your branch. Click \"Compare pull request.\" The system displays the pull request dialog. The pull request compares your changes to the master branch on the docker/docker repository. Edit the dialog's description and add a reference to the issue you are fixing. GitHub helps you out by searching for the issue as you type. Scroll down and verify the PR contains the commits and changes you expect. For example, is the file count correct? Are the changes in the files what\nyou expect? Press \"Create pull request\". The system creates the request and opens it for you in the docker/docker \nrepository.",
"title": "Create a PR on GitHub"
},
{
"loc": "/project/create-pr#where-to-go-next",
"tags": "",
"text": "Congratulations, you've created your first pull request to Docker. The next\nstep is for you to learn how to participate in your PR's\nreview .",
"title": "Where to go next"
},
{
"loc": "/project/review-pr/",
"tags": "",
"text": "Participate in the PR Review\nCreating a pull request is nearly the end of the contribution process. At this\npoint, your code is reviewed both by our continuous integration (CI) systems and\nby our maintainers. \nThe CI system is an automated system. The maintainers are human beings that also\nwork on Docker. You need to understand and work with both the \"bots\" and the\n\"beings\" to review your contribution.\nHow we process your review\nFirst to review your pull request is Gordon. Gordon is fast. He checks your\npull request (PR) for common problems like a missing signature. If Gordon finds a\nproblem, he'll send an email through your GitHub user account:\n\nOur build bot system starts building your changes while Gordon sends any emails. \nThe build system double-checks your work by compiling your code with Docker's master\ncode. Building includes running the same tests you ran locally. If you forgot\nto run tests or missed something in fixing problems, the automated build is our\nsafety check. \nAfter Gordon and the bots, the \"beings\" review your work. Docker maintainers look\nat your pull request and comment on it. The shortest comment you might see is\nLGTM which means looks-good-to-me. If you get an LGTM, that\nis a good thing: you passed that review. \nFor complex changes, maintainers may ask you questions or ask you to change\nsomething about your submission. All maintainer comments on a PR go to the\nemail address associated with your GitHub account. Any GitHub user who \n\"participates\" in a PR receives an email too. Participating means creating or \ncommenting on a PR.\nOur maintainers are very experienced Docker users and open source contributors.\nSo, they value your time and will try to work efficiently with you by keeping\ntheir comments specific and brief. 
If they ask you to make a change, you'll\nneed to update your pull request with additional changes.\nUpdate an Existing Pull Request\nTo update your existing pull request:\n\n\nChange one or more files in your local docker-fork repository.\n\n\nCommit the change with the git commit --amend command.\n$ git commit --amend\n\nGit opens an editor containing your last commit message.\n\n\nAdjust your last comment to reflect this new change.\nAdded a new sentence per Anaud's suggestion\n\nSigned-off-by: Mary Anthony mary@docker.com\n\n# Please enter the commit message for your changes. Lines starting\n# with '#' will be ignored, and an empty message aborts the commit.\n# On branch 11038-fix-rhel-link\n# Your branch is up-to-date with 'origin/11038-fix-rhel-link'.\n#\n# Changes to be committed:\n# modified: docs/sources/installation/mac.md\n# modified: docs/sources/installation/rhel.md\n\n\n\nPush to your origin.\n$ git push origin\n\n\n\nOpen your browser to your pull request on GitHub.\nYou should see your pull request now contains your newly pushed code.\n\n\nAdd a comment to your pull request.\nGitHub only notifies PR participants when you comment. For example, you can\nmention that you updated your PR. Your comment alerts the maintainers that\nyou made an update.\n\n\nA change requires LGTMs from an absolute majority of an affected component's\nmaintainers. For example, if you change docs/ and registry/ code, an\nabsolute majority of the docs/ and the registry/ maintainers must approve\nyour PR. Once you get approval, we merge your pull request into Docker's \nmaster code branch. \nAfter the merge\nIt can take time to see a merged pull request in Docker's official release. \nA master build is available almost immediately though. 
Docker builds and\nupdates its development binaries after each merge to master.\n\n\nBrowse to https://master.dockerproject.com/.\n\n\nLook for the binary appropriate to your system.\n\n\nDownload and run the binary.\nYou might want to run the binary in a container though. This\nwill keep your local host environment clean.\n\n\nView any documentation changes at docs.master.dockerproject.com. \n\n\nOnce you've verified everything merged, feel free to delete your feature branch\nfrom your fork. For information on how to do this, \n\nsee the GitHub help on deleting branches. \nWhere to go next\nAt this point, you have completed all the basic tasks in our contributors guide.\nIf you enjoyed contributing, let us know by completing another beginner\nissue or two. We really appreciate the help. \nIf you are very experienced and want to make a major change, go on to \nlearn about advanced contributing.",
"title": "Participate in the PR review"
},
{
"loc": "/project/review-pr#participate-in-the-pr-review",
"tags": "",
"text": "Creating a pull request is nearly the end of the contribution process. At this\npoint, your code is reviewed both by our continuous integration (CI) systems and\nby our maintainers. The CI system is an automated system. The maintainers are human beings that also\nwork on Docker. You need to understand and work with both the \"bots\" and the\n\"beings\" to review your contribution.",
"title": "Participate in the PR Review"
},
{
"loc": "/project/review-pr#how-we-proces-your-review",
"tags": "",
"text": "First to review your pull request is Gordon. Gordon is fast. He checks your\npull request (PR) for common problems like a missing signature. If Gordon finds a\nproblem, he'll send an email through your GitHub user account: Our build bot system starts building your changes while Gordon sends any emails. The build system double-checks your work by compiling your code with Docker's master\ncode. Building includes running the same tests you ran locally. If you forgot\nto run tests or missed something in fixing problems, the automated build is our\nsafety check. After Gordon and the bots, the \"beings\" review your work. Docker maintainers look\nat your pull request and comment on it. The shortest comment you might see is LGTM which means looks-good-to-me. If you get an LGTM , that\nis a good thing: you passed that review. For complex changes, maintainers may ask you questions or ask you to change\nsomething about your submission. All maintainer comments on a PR go to the\nemail address associated with your GitHub account. Any GitHub user who \n\"participates\" in a PR receives an email too. Participating means creating or \ncommenting on a PR. Our maintainers are very experienced Docker users and open source contributors.\nSo, they value your time and will try to work efficiently with you by keeping\ntheir comments specific and brief. If they ask you to make a change, you'll\nneed to update your pull request with additional changes.",
"title": "How we process your review"
},
{
"loc": "/project/review-pr#update-an-existing-pull-request",
"tags": "",
"text": "To update your existing pull request: Change one or more files in your local docker-fork repository. Commit the change with the git commit --amend command. $ git commit --amend Git opens an editor containing your last commit message. Adjust your last comment to reflect this new change. Added a new sentence per Anaud's suggestion\n\nSigned-off-by: Mary Anthony mary@docker.com \n\n# Please enter the commit message for your changes. Lines starting\n# with '#' will be ignored, and an empty message aborts the commit.\n# On branch 11038-fix-rhel-link\n# Your branch is up-to-date with 'origin/11038-fix-rhel-link'.\n#\n# Changes to be committed:\n# modified: docs/sources/installation/mac.md\n# modified: docs/sources/installation/rhel.md Push to your origin. $ git push origin Open your browser to your pull request on GitHub. You should see your pull request now contains your newly pushed code. Add a comment to your pull request. GitHub only notifies PR participants when you comment. For example, you can\nmention that you updated your PR. Your comment alerts the maintainers that\nyou made an update. A change requires LGTMs from an absolute majority of an affected component's\nmaintainers. For example, if you change docs/ and registry/ code, an\nabsolute majority of the docs/ and the registry/ maintainers must approve\nyour PR. Once you get approval, we merge your pull request into Docker's master code branch.",
"title": "Update an Existing Pull Request"
},
{
"loc": "/project/review-pr#after-the-merge",
"tags": "",
"text": "It can take time to see a merged pull request in Docker's official release. \nA master build is available almost immediately though. Docker builds and\nupdates its development binaries after each merge to master . Browse to https://master.dockerproject.com/ . Look for the binary appropriate to your system. Download and run the binary. You might want to run the binary in a container though. This\nwill keep your local host environment clean. View any documentation changes at docs.master.dockerproject.com . Once you've verified everything merged, feel free to delete your feature branch\nfrom your fork. For information on how to do this, \nsee the GitHub help on deleting branches .",
"title": "After the merge"
},
{
"loc": "/project/review-pr#where-to-go-next",
"tags": "",
"text": "At this point, you have completed all the basic tasks in our contributors guide.\nIf you enjoyed contributing, let us know by completing another beginner\nissue or two. We really appreciate the help. If you are very experienced and want to make a major change, go on to learn about advanced contributing .",
"title": "Where to go next"
},
{
"loc": "/project/advanced-contributing/",
"tags": "",
"text": "Advanced contributing\nIn this section, you learn about the more advanced contributions you can make.\nThey are advanced because they have a more involved workflow or require greater\nprogramming experience. Don't be scared off though, if you like to stretch and\nchallenge yourself, this is the place for you.\nThis section gives generalized instructions for advanced contributions. You'll\nread about the workflow but there are not specific descriptions of commands.\nYour goal should be to understand the processes described.\nAt this point, you should have read and worked through the earlier parts of\nthe project contributor guide. You should also have\n made at least one project contribution.\nRefactor or cleanup proposal\nA refactor or cleanup proposal changes Docker's internal structure without\naltering the external behavior. To make this type of proposal:\n\n\nFork docker/docker.\n\n\nMake your changes in a feature branch.\n\n\nSync and rebase with master as you work.\n\n\nRun the full test suite.\n\n\nSubmit your code through a pull request (PR).\nThe PR's title should have the format:\nCleanup: short title\nIf your changes required logic changes, note that in your request.\n\n\nWork through Docker's review process until merge.\n\n\nDesign proposal\nA design proposal solves a problem or adds a feature to the Docker software.\nThe process for submitting design proposals requires two pull requests, one\nfor the design and one for the implementation.\n\nThe important thing to notice is that both the design pull request and the\nimplementation pull request go through a review. In other words, there is\nconsiderable time commitment in a design proposal; so, you might want to pair\nwith someone on design work.\nThe following provides greater detail on the process:\n\n\nCome up with an idea.\nIdeas usually come from limitations users feel working with a product. So,\ntake some time to really use Docker. 
Try it on different platforms; explore\nhow it works with different web applications. Go to some community events\nand find out what other users want.\n\n\nReview existing issues and proposals to make sure no other user is proposing a similar idea.\nThe design proposals are all online in our GitHub pull requests. \n\n\nTalk to the community about your idea.\nWe have lots of community forums\nwhere you can get feedback on your idea. Float your idea in a forum or two\nto get some commentary going on it.\n\n\nFork docker/docker and clone the repo to your local host.\n\n\nCreate a new Markdown file in the area you wish to change. \nFor example, if you want to redesign our daemon, create a new file under the\ndaemon/ folder. \n\n\nName the file descriptively, for example redesign-daemon-proposal.md.\n\n\nWrite a proposal for your change into the file.\nThis is a Markdown file that describes your idea. Your proposal\nshould include information like:\n\nWhy is this change needed or what are the use cases?\nWhat are the requirements this change should meet?\nWhat are some ways to design/implement this feature?\nWhich design/implementation do you think is best and why?\nWhat are the risks or limitations of your proposal?\n\nThis is your chance to convince people your idea is sound. \n\n\nSubmit your proposal in a pull request to docker/docker.\nThe title should have the format:\nProposal: short title\nThe body of the pull request should include a brief summary of your change\nand then say something like \"See the file for a complete description\".\n\n\nRefine your proposal through review.\nThe maintainers and the community review your proposal. You'll need to\nanswer questions and sometimes explain or defend your approach. This is a\nchance for everyone to both teach and learn.\n\n\nPull request accepted.\nYour request may also be rejected. Not every idea is a good fit for Docker.\nLet's assume though your proposal succeeded. 
\n\n\nImplement your idea.\nImplementation uses all the standard practices of any contribution.\n\nfork docker/docker\ncreate a feature branch\nsync frequently back to master\ntest as you go and full test before a PR\n\nIf you run into issues, the community is there to help.\n\n\nWhen you have a complete implementation, submit a pull request back to docker/docker.\n\n\nReview and iterate on your code.\nIf you are making a large code change, you can expect greater scrutiny\nduring this phase. \n\n\nAcceptance and merge!",
"title": "Advanced contributing"
},
{
"loc": "/project/advanced-contributing#advanced-contributing",
"tags": "",
"text": "In this section, you learn about the more advanced contributions you can make.\nThey are advanced because they have a more involved workflow or require greater\nprogramming experience. Don't be scared off though, if you like to stretch and\nchallenge yourself, this is the place for you. This section gives generalized instructions for advanced contributions. You'll\nread about the workflow but there are not specific descriptions of commands.\nYour goal should be to understand the processes described. At this point, you should have read and worked through the earlier parts of\nthe project contributor guide. You should also have made at least one project contribution .",
"title": "Advanced contributing"
},
{
"loc": "/project/advanced-contributing#refactor-or-cleanup-proposal",
"tags": "",
"text": "A refactor or cleanup proposal changes Docker's internal structure without\naltering the external behavior. To make this type of proposal: Fork docker/docker . Make your changes in a feature branch. Sync and rebase with master as you work. Run the full test suite. Submit your code through a pull request (PR). The PR's title should have the format: Cleanup: short title If your changes required logic changes, note that in your request. Work through Docker's review process until merge.",
"title": "Refactor or cleanup proposal"
},
{
"loc": "/project/advanced-contributing#design-proposal",
"tags": "",
"text": "A design proposal solves a problem or adds a feature to the Docker software.\nThe process for submitting design proposals requires two pull requests, one\nfor the design and one for the implementation. The important thing to notice is that both the design pull request and the\nimplementation pull request go through a review. In other words, there is\nconsiderable time commitment in a design proposal; so, you might want to pair\nwith someone on design work. The following provides greater detail on the process: Come up with an idea. Ideas usually come from limitations users feel working with a product. So,\ntake some time to really use Docker. Try it on different platforms; explore\nhow it works with different web applications. Go to some community events\nand find out what other users want. Review existing issues and proposals to make sure no other user is proposing a similar idea. The design proposals are all online in our GitHub pull requests . Talk to the community about your idea. We have lots of community forums \nwhere you can get feedback on your idea. Float your idea in a forum or two\nto get some commentary going on it. Fork docker/docker and clone the repo to your local host. Create a new Markdown file in the area you wish to change. For example, if you want to redesign our daemon, create a new file under the daemon/ folder. Name the file descriptively, for example redesign-daemon-proposal.md . Write a proposal for your change into the file. This is a Markdown file that describes your idea. Your proposal\nshould include information like: Why is this change needed or what are the use cases? What are the requirements this change should meet? What are some ways to design/implement this feature? Which design/implementation do you think is best and why? What are the risks or limitations of your proposal? This is your chance to convince people your idea is sound. Submit your proposal in a pull request to docker/docker . 
The title should have the format: Proposal: short title The body of the pull request should include a brief summary of your change\nand then say something like \" See the file for a complete description \". Refine your proposal through review. The maintainers and the community review your proposal. You'll need to\nanswer questions and sometimes explain or defend your approach. This is a\nchance for everyone to both teach and learn. Pull request accepted. Your request may also be rejected. Not every idea is a good fit for Docker.\nLet's assume though your proposal succeeded. Implement your idea. Implementation uses all the standard practices of any contribution. fork docker/docker create a feature branch sync frequently back to master test as you go and full test before a PR If you run into issues, the community is there to help. When you have a complete implementation, submit a pull request back to docker/docker . Review and iterate on your code. If you are making a large code change, you can expect greater scrutiny\nduring this phase. Acceptance and merge!",
"title": "Design proposal"
},
{
"loc": "/project/get-help/",
"tags": "",
"text": "Where to chat or get help\nThere are several communication channels you can use to chat with Docker\ncommunity members and developers.\n\nInternet Relay Chat (IRC)\nIRC is a direct line to our most knowledgeable Docker users.\nThe #docker and #docker-dev groups are on\nirc.freenode.net. IRC was first created in 1988.\nSo, it is a rich chat protocol but it can overwhelm new users. You can search\nour chat archives.\nRead our IRC quickstart guide below for an easy way to get started.\n\nGoogle Groups\nThere are two groups.\nDocker-user\nis for people using Docker containers.\nThe docker-dev\ngroup is for contributors and other people contributing to the Docker\nproject.\n\nTwitter\nYou can follow Docker's twitter\nto get updates on our products. You can also tweet us questions or just\nshare blogs or stories.\n\nStack Overflow\nStack Overflow has over 7,000 Docker questions listed. We regularly\nmonitor Docker questions\nand so do many other knowledgeable Docker users.\n\nIRC Quickstart\nIRC can also be overwhelming for new users. This quickstart shows you\nthe easiest way to connect to IRC.\n\n\nIn your browser open http://webchat.freenode.net\n\n\n\nFill out the form.\nNickname: The short name you want to be known as in IRC.\nChannels: #docker\nreCAPTCHA: Use the value provided.\n\n\n\nClick \"Connect\".\nThe system connects you to chat. You'll see a lot of text. At the bottom of\nthe display is a command line. Just above the command line the system asks\nyou to register.\n\n\n\nIn the command line, register your nickname.\n/msg NickServ REGISTER password youremail@example.com\n\n\nThe IRC system sends an email to the address you\nenter. 
The email contains instructions for completing your registration.\n\n\nOpen your mail client and look for the email.\n\n\n\nBack in the browser, complete the registration according to the email.\n/msg NickServ VERIFY REGISTER moxiegirl_ acljtppywjnr\n\n\n\nJoin the #docker group using the following command.\n/j #docker\n\nYou can also join the #docker-dev group.\n/j #docker-dev\n\n\n\nTo ask questions to the channel just type messages in the command line.\n\n\n\nTo quit, close the browser window.\n\n\nTips and learning more about IRC\nNext time you return to log into chat, you'll need to re-enter your password \non the command line using this command:\n/msg NickServ identify password\n\nIf you forget or lose your password see the FAQ on\nfreenode.net to learn how to recover it.\nThis quickstart was meant to get you up and into IRC very quickly. If you find \nIRC useful there is a lot more to learn. Drupal, another open source project, \nactually has \nwritten a lot of good documentation about using IRC for their project \n(thanks Drupal!).",
"title": "Where to get help"
},
{
"loc": "/project/get-help#where-to-chat-or-get-help",
"tags": "",
"text": "There are several communication channels you can use to chat with Docker\ncommunity members and developers.\n\nInternet Relay Chat (IRC)\nIRC is a direct line to our most knowledgeable Docker users.\nThe #docker and #docker-dev groups are on\nirc.freenode.net. IRC was first created in 1988.\nSo, it is a rich chat protocol but it can overwhelm new users. You can search\nour chat archives.\nRead our IRC quickstart guide below for an easy way to get started.\n\nGoogle Groups\nThere are two groups.\nDocker-user\nis for people using Docker containers.\nThe docker-dev\ngroup is for contributors and other people contributing to the Docker\nproject.\n\nTwitter\nYou can follow Docker's twitter\nto get updates on our products. You can also tweet us questions or just\nshare blogs or stories.\n\nStack Overflow\nStack Overflow has over 7,000 Docker questions listed. We regularly\nmonitor Docker questions\nand so do many other knowledgeable Docker users.",
"title": "Where to chat or get help"
},
{
"loc": "/project/get-help#irc-quickstart",
"tags": "",
"text": "IRC can also be overwhelming for new users. This quickstart shows you\nthe easiest way to connect to IRC. In your browser open http://webchat.freenode.net Fill out the form.\nNickname: The short name you want to be known as in IRC.\nChannels: #docker\nreCAPTCHA: Use the value provided.\nClick \"Connect\". The system connects you to chat. You'll see a lot of text. At the bottom of\nthe display is a command line. Just above the command line the system asks\nyou to register. In the command line, register your nickname. /msg NickServ REGISTER password youremail@example.com The IRC system sends an email to the address you\nenter. The email contains instructions for completing your registration. Open your mail client and look for the email. Back in the browser, complete the registration according to the email. /msg NickServ VERIFY REGISTER moxiegirl_ acljtppywjnr Join the #docker group using the following command. /j #docker You can also join the #docker-dev group. /j #docker-dev To ask questions to the channel just type messages in the command line. To quit, close the browser window. Tips and learning more about IRC Next time you return to log into chat, you'll need to re-enter your password\non the command line using this command: /msg NickServ identify password If you forget or lose your password see the FAQ on\nfreenode.net to learn how to recover it. This quickstart was meant to get you up and into IRC very quickly. If you find\nIRC useful there is a lot more to learn. Drupal, another open source project,\nactually has\nwritten a lot of good documentation about using IRC for their project\n(thanks Drupal!).",
"title": "IRC Quickstart"
},
{
"loc": "/project/coding-style/",
"tags": "",
"text": "Coding Style Checklist\nThis checklist summarizes the material you experienced working through make a\ncode contribution and advanced\ncontributing. The checklist applies both to program code and to\ndocumentation code.\nChange and commit code\n\n\nFork the docker/docker repository.\n\n\nMake changes on your fork in a feature branch. Name your branch XXXX-something\n where XXXX is the issue number you are working on.\n\n\nRun gofmt -s -w file.go on each changed file before\n committing your changes. Most editors have plug-ins that do this automatically.\n\n\nUpdate the documentation when creating or modifying features. \n\n\nCommits that fix or close an issue should reference them in the commit message\n Closes #XXXX or Fixes #XXXX. Mentions help by automatically closing the\n issue on a merge.\n\n\nAfter every commit, run the test suite and ensure it is passing.\n\n\nSync and rebase frequently as you code to keep up with docker master.\n\n\nSet your git signature and make sure you sign each commit.\n\n\nDo not add yourself to the AUTHORS file. This file is autogenerated from the\n Git history.\n\n\nTests and testing\n\n\nSubmit unit tests for your changes. \n\n\nMake use of the built-in Go test framework. \n\n\nUse existing Docker test files (name_test.go) for inspiration. \n\n\nRun the full test suite on your\n branch before submitting a pull request.\n\n\nRun make docs to build the documentation and then check it locally.\n\n\nUse an online grammar\n checker or similar to test your documentation changes for clarity,\n concision, and correctness.\n\n\nPull requests\n\n\nSync and cleanly rebase on top of Docker's master without multiple branches\n mixed into the PR.\n\n\nBefore the pull request, squash your commits into logical units of work using\n git rebase -i and git push -f. 
\n\n\nInclude documentation changes in the same commit so that a revert would\n remove all traces of the feature or fix.\n\n\nReference each issue in your pull request description (#XXXX)\n\n\nRespond to pull requests reviews\n\n\nDocker maintainers use LGTM (looks-good-to-me) in PR comments\n to indicate acceptance.\n\n\nCode review comments may be added to your pull request. Discuss, then make\n the suggested modifications and push additional commits to your feature\n branch.\n\n\nIncorporate changes on your feature branch and push to your fork. This\n automatically updates your open pull request.\n\n\nPost a comment after pushing to alert reviewers to PR changes; pushing a\n change does not send notifications.\n\n\nA change requires LGTMs from an absolute majority of the maintainers of an\n affected component. For example, if you change docs/ and registry/ code,\n an absolute majority of the docs/ and the registry/ maintainers must\n approve your PR.\n\n\nMerges after pull requests\n\n\nAfter a merge, a master build is\n available almost immediately.\n\n\nIf you made a documentation change, you can see it at\n docs.master.dockerproject.com.",
"title": "Coding style guide"
},
{
"loc": "/project/coding-style#coding-style-checklist",
"tags": "",
"text": "This checklist summarizes the material you experienced working through make a\ncode contribution and advanced\ncontributing . The checklist applies both to program code and to\ndocumentation code.",
"title": "Coding Style Checklist"
},
{
"loc": "/project/coding-style#change-and-commit-code",
"tags": "",
"text": "Fork the docker/docker repository. Make changes on your fork in a feature branch. Name your branch XXXX-something \n where XXXX is the issue number you are working on. Run gofmt -s -w file.go on each changed file before\n committing your changes. Most editors have plug-ins that do this automatically. Update the documentation when creating or modifying features. Commits that fix or close an issue should reference them in the commit message\n Closes #XXXX or Fixes #XXXX . Mentions help by automatically closing the\n issue on a merge. After every commit, run the test suite and ensure it is passing. Sync and rebase frequently as you code to keep up with docker master. Set your git signature and make sure you sign each commit. Do not add yourself to the AUTHORS file. This file is autogenerated from the\n Git history.",
"title": "Change and commit code"
},
{
"loc": "/project/coding-style#tests-and-testing",
"tags": "",
"text": "Submit unit tests for your changes. Make use of Go's built-in test framework. Use existing Docker test files ( name_test.go ) for inspiration. Run the full test suite on your\n branch before submitting a pull request. Run make docs to build the documentation and then check it locally. Use an online grammar\n checker or similar to test your documentation changes for clarity,\n concision, and correctness.",
"title": "Tests and testing"
},
{
"loc": "/project/coding-style#pull-requests",
"tags": "",
"text": "Sync and cleanly rebase on top of Docker's master without multiple branches\n mixed into the PR. Before the pull request, squash your commits into logical units of work using\n git rebase -i and git push -f . Include documentation changes in the same commit so that a revert would\n remove all traces of the feature or fix. Reference each issue in your pull request description ( #XXXX )",
"title": "Pull requests"
},
{
"loc": "/project/coding-style#respond-to-pull-requests-reviews",
"tags": "",
"text": "Docker maintainers use LGTM (looks-good-to-me) in PR comments\n to indicate acceptance. Code review comments may be added to your pull request. Discuss, then make\n the suggested modifications and push additional commits to your feature\n branch. Incorporate changes on your feature branch and push to your fork. This\n automatically updates your open pull request. Post a comment after pushing to alert reviewers to PR changes; pushing a\n change does not send notifications. A change requires LGTMs from an absolute majority of maintainers of an\n affected component. For example, if you change docs/ and registry/ code,\n an absolute majority of the docs/ and the registry/ maintainers must\n approve your PR.",
"title": "Respond to pull requests reviews"
},
{
"loc": "/project/coding-style#merges-after-pull-requests",
"tags": "",
"text": "After a merge, a master build is\n available almost immediately. If you made a documentation change, you can see it at\n docs.master.dockerproject.com .",
"title": "Merges after pull requests"
},
{
"loc": "/project/doc-style/",
"tags": "",
"text": "Docker documentation: style & grammar conventions\nStyle standards\nOver time, different publishing communities have written standards for the style\nand grammar they prefer in their publications. These standards are called\nstyle guides. Generally, Docker\u2019s\ndocumentation uses the standards described in the\nAssociated Press's (AP) style guide. \nIf a question about syntactical, grammatical, or lexical practice comes up,\nrefer to the AP guide first. If you don\u2019t have a copy of (or online subscription\nto) the AP guide, you can almost always find an answer to a specific question by\nsearching the web. If you can\u2019t find an answer, please ask a\nmaintainer and\nwe will find the answer.\nThat said, please don't get too hung up on using correct style. We'd rather have\nyou submit good information that doesn't conform to the guide than no\ninformation at all. Docker's tech writers are always happy to help you with the\nprose, and we promise not to judge or use a red pen!\n\nNote:\nThe documentation is written with paragraphs wrapped at 80 column lines to\nmake it easier for terminal use. You can probably set up your favorite text\neditor to do this automatically for you.\n\nProse style\nIn general, try to write simple, declarative prose. We prefer short,\nsingle-clause sentences and brief three-to-five sentence paragraphs. Try to\nchoose vocabulary that is straightforward and precise. Avoid creating new terms,\nusing obscure terms or, in particular, using a lot of jargon. For example, use\n\"use\" instead of \"leverage\".\nThat said, don\u2019t feel like you have to write for localization or for\nEnglish-as-a-second-language (ESL) speakers specifically. Assume you are writing\nfor an ordinary speaker of English with a basic university education. 
If your\nprose is simple, clear, and straightforward it will translate readily.\nOne way to think about this is to assume Docker\u2019s users are generally university\neducated and read at at least a \"16th\" grade level (meaning they have a\nuniversity degree). You can use a readability\ntester to help guide your judgement. For\nexample, the readability score for the phrase \"Containers should be ephemeral\"\nis around the 13th grade level (first year at university), and so is acceptable.\nIn all cases, we prefer clear, concise communication over stilted, formal\nlanguage. Don't feel like you have to write documentation that \"sounds like\ntechnical writing.\"\nMetaphor and figurative language\nOne exception to the \"don\u2019t write directly for ESL\" rule is to avoid the use of\nmetaphor or other\nfigurative language to\ndescribe things. There are too many cultural and social issues that can prevent\na reader from correctly interpreting a metaphor.\nSpecific conventions\nBelow are some specific recommendations (and a few deviations) from AP style\nthat we use in our docs.\nContractions\nAs long as your prose does not become too slangy or informal, it's perfectly\nacceptable to use contractions in our documentation. Make sure to use\napostrophes correctly.\nUse of dashes in a sentence.\nDashes refer to the en dash (\u2013) and the em dash (\u2014). Dashes can be used to\nseparate parenthetical material.\nUsage Example: This is an example of a Docker client \u2013 which uses the Big Widget\nto run \u2013 and does x, y, and z.\nUse dashes cautiously and consider whether commas or parentheses would work just\nas well. We always emphasize short, succinct sentences.\nMore info from the always handy Grammar Girl site.\nPronouns\nIt's okay to use first and second person pronouns. Specifically, use \"we\" to\nrefer to Docker and \"you\" to refer to the user. 
For example, \"We built the\nexec command so you can resize a TTY session.\"\nAs much as possible, avoid using gendered pronouns (\"he\" and \"she\", etc.).\nEither recast the sentence so the pronoun is not needed or, less preferably,\nuse \"they\" instead. If you absolutely can't get around using a gendered pronoun,\npick one and stick to it. Which one you choose is up to you. One common\nconvention is to use the pronoun of the author's gender, but if you prefer to\ndefault to \"he\" or \"she\", that's fine too.\nCapitalization\nIn general\nOnly proper nouns should be capitalized in body text. In general, strive to be\nas strict as possible in applying this rule. Avoid using capitals for emphasis\nor to denote \"specialness\".\nThe word \"Docker\" should always be capitalized when referring to either the\ncompany or the technology. The only exception is when the term appears in a code\nsample.\nStarting sentences\nBecause code samples should always be written exactly as they would appear\non-screen, you should avoid starting sentences with a code sample.\nIn headings\nHeadings take sentence capitalization, meaning that only the first letter is\ncapitalized (and words that would normally be capitalized in a sentence, e.g.,\n\"Docker\"). Do not use Title Case (i.e., capitalizing every word) for headings. Generally, we adhere to AP style\nfor titles.\nPeriods\nWe prefer one space after a period at the end of a sentence, not two. \nSee lists below for how to punctuate list items.\nAbbreviations and acronyms\n\n\nExempli gratia (e.g.) and id est ( i.e.): these should always have periods and\nare always followed by a comma.\n\n\nAcronyms are pluralized by simply adding \"s\", e.g., PCs, OSs.\n\n\nOn first use on a given page, the complete term should be used, with the\nabbreviation or acronym in parentheses. E.g., Red Hat Enterprise Linux (RHEL).\nThe exception is common, non-technical acronyms like AKA or ASAP. Note that\nacronyms other than i.e. and e.g. 
are capitalized.\n\n\nOther than \"e.g.\" and \"i.e.\" (as discussed above), acronyms do not take\nperiods, PC not P.C.\n\n\nLists\nWhen writing lists, keep the following in mind:\nUse bullets when the items being listed are independent of each other and the\norder of presentation is not important.\nUse numbers for steps that have to happen in order or if you have mentioned the\nlist in introductory text. For example, if you wrote \"There are three config\nsettings available for SSL, as follows:\", you would number each config setting\nin the subsequent list.\nIn all lists, if an item is a complete sentence, it should end with a\nperiod. Otherwise, we prefer no terminal punctuation for list items.\nEach item in a list should start with a capital.\nNumbers\nWrite out numbers in body text and titles from one to ten. From 11 on, use numerals.\nNotes\nUse notes sparingly and only to bring things to the reader's attention that are\ncritical or otherwise deserving of being called out from the body text. Please\nformat all notes as follows:\n **Note:**\n One line of note text\n another line of note text\n\nAvoid excess use of \"i.e.\"\nMinimize your use of \"i.e.\". It can add an unnecessary interpretive burden on\nthe reader. Avoid writing \"This is a thing, i.e., it is like this\". Just\nsay what it is: \"This thing is \u2026\"\nPreferred usages\nLogin vs. log in.\nA \"login\" is a noun (one word), as in \"Enter your login\". \"Log in\" is a compound\nverb (two words), as in \"Log in to the terminal\".\nOxford comma\nOne way in which we differ from AP style is that Docker\u2019s docs use the Oxford\ncomma in all cases. That\u2019s our\nposition on this controversial topic, we won't change our mind, and that\u2019s that!\nCode and UI text styling\nWe require code font styling (monospace, sans-serif) for all text that refers\nto a command or other input or output from the CLI. This includes file paths\n(e.g., /etc/hosts/docker.conf). 
If you enclose text in backticks (`), Markdown\nwill style the text as code. \nText from a CLI should be quoted verbatim, even if it contains errors or its\nstyle contradicts this guide. You can add \"(sic)\" after the quote to indicate\nthe errors are in the quote and are not errors in our docs.\nText taken from a GUI (e.g., menu text or button text) should appear in \"double\nquotes\". The text should take the exact same capitalization, etc. as appears in\nthe GUI. E.g., Click \"Continue\" to save the settings.\nText that refers to a keyboard command or hotkey is capitalized (e.g., Ctrl-D).\nWhen writing CLI examples, give the user hints by making the examples resemble\nexactly what they see in their shell: \n\nIndent shell examples by 4 spaces so they get rendered as code blocks.\nStart typed commands with $ (dollar space), so that they are easily\n differentiated from program output.\nProgram output has no prefix.\nComments begin with # (hash space).\nIn-container shell commands begin with $$ (dollar dollar space).\n\nPlease test all code samples to ensure that they are correct and functional so\nthat users can successfully cut-and-paste samples directly into the CLI.\nPull requests\nThe pull request (PR) process is in place so that we can ensure changes made to\nthe docs are the best changes possible. A good PR will do some or all of the\nfollowing:\n\nExplain why the change is needed\nPoint out potential issues or questions\nAsk for help from experts in the company or the community\nEncourage feedback from core developers and others involved in creating the\n software being documented.\n\nWriting a PR that is singular in focus and has clear objectives will encourage\nall of the above. Done correctly, the process allows reviewers (maintainers and\ncommunity members) to validate the claims of the documentation and identify\npotential problems in communication or presentation. 
\nCommit messages\nIn order to write clear, useful commit messages, please follow these\nrecommendations.\nLinks\nFor accessibility and usability reasons, avoid using phrases such as \"click\nhere\" for link text. Recast your sentence so that the link text describes the\ncontent of the link, as we did in the\n\"Commit messages\" section above.\nYou can use relative links (../linkeditem) to link to other pages in Docker's\ndocumentation.\nGraphics\nWhen you need to add a graphic, try to make the file size as small as possible.\nIf you need help reducing the file size of a high-resolution image, feel free to\ncontact us for help.\nUsually, graphics should go in the same directory as the .md file that\nreferences them, or in a subdirectory for images if one already exists.\nThe preferred file format for graphics is PNG, but GIF and JPG are also\nacceptable. \nIf you are referring to a specific part of the UI in an image, use\ncall-outs (circles and arrows or lines) to highlight what you\u2019re referring to.\nLine width for call-outs should not exceed five pixels. The preferred color for\ncall-outs is red.\nBe sure to include descriptive alt-text for the graphic. This greatly helps\nusers with accessibility issues.\nLastly, be sure you have permission to use any included graphics.",
"title": "Documentation style guide"
},
{
"loc": "/project/doc-style#docker-documentation-style-grammar-conventions",
"tags": "",
"text": "",
"title": "Docker documentation: style & grammar conventions"
},
{
"loc": "/project/doc-style#style-standards",
"tags": "",
"text": "Over time, different publishing communities have written standards for the style\nand grammar they prefer in their publications. These standards are called style guides . Generally, Docker\u2019s\ndocumentation uses the standards described in the Associated Press's (AP) style guide . \nIf a question about syntactical, grammatical, or lexical practice comes up,\nrefer to the AP guide first. If you don\u2019t have a copy of (or online subscription\nto) the AP guide, you can almost always find an answer to a specific question by\nsearching the web. If you can\u2019t find an answer, please ask a maintainer and\nwe will find the answer. That said, please don't get too hung up on using correct style. We'd rather have\nyou submit good information that doesn't conform to the guide than no\ninformation at all. Docker's tech writers are always happy to help you with the\nprose, and we promise not to judge or use a red pen! Note: \nThe documentation is written with paragraphs wrapped at 80 column lines to\nmake it easier for terminal use. You can probably set up your favorite text\neditor to do this automatically for you. Prose style In general, try to write simple, declarative prose. We prefer short,\nsingle-clause sentences and brief three-to-five sentence paragraphs. Try to\nchoose vocabulary that is straightforward and precise. Avoid creating new terms,\nusing obscure terms or, in particular, using a lot of jargon. For example, use\n\"use\" instead of \"leverage\". That said, don\u2019t feel like you have to write for localization or for\nEnglish-as-a-second-language (ESL) speakers specifically. Assume you are writing\nfor an ordinary speaker of English with a basic university education. If your\nprose is simple, clear, and straightforward it will translate readily. One way to think about this is to assume Docker\u2019s users are generally university\neducated and read at at least a \"16th\" grade level (meaning they have a\nuniversity degree). 
You can use a readability\ntester to help guide your judgement. For\nexample, the readability score for the phrase \"Containers should be ephemeral\"\nis around the 13th grade level (first year at university), and so is acceptable. In all cases, we prefer clear, concise communication over stilted, formal\nlanguage. Don't feel like you have to write documentation that \"sounds like\ntechnical writing.\" Metaphor and figurative language One exception to the \"don\u2019t write directly for ESL\" rule is to avoid the use of\nmetaphor or other figurative language to\ndescribe things. There are too many cultural and social issues that can prevent\na reader from correctly interpreting a metaphor.",
"title": "Style standards"
},
{
"loc": "/project/doc-style#specific-conventions",
"tags": "",
"text": "Below are some specific recommendations (and a few deviations) from AP style\nthat we use in our docs. Contractions As long as your prose does not become too slangy or informal, it's perfectly\nacceptable to use contractions in our documentation. Make sure to use\napostrophes correctly. Use of dashes in a sentence. Dashes refer to the en dash (\u2013) and the em dash (\u2014). Dashes can be used to\nseparate parenthetical material. Usage Example: This is an example of a Docker client \u2013 which uses the Big Widget\nto run \u2013 and does x, y, and z. Use dashes cautiously and consider whether commas or parentheses would work just\nas well. We always emphasize short, succinct sentences. More info from the always handy Grammar Girl site . Pronouns It's okay to use first and second person pronouns. Specifically, use \"we\" to\nrefer to Docker and \"you\" to refer to the user. For example, \"We built the exec command so you can resize a TTY session.\" As much as possible, avoid using gendered pronouns (\"he\" and \"she\", etc.).\nEither recast the sentence so the pronoun is not needed or, less preferably,\nuse \"they\" instead. If you absolutely can't get around using a gendered pronoun,\npick one and stick to it. Which one you choose is up to you. One common\nconvention is to use the pronoun of the author's gender, but if you prefer to\ndefault to \"he\" or \"she\", that's fine too. Capitalization In general Only proper nouns should be capitalized in body text. In general, strive to be\nas strict as possible in applying this rule. Avoid using capitals for emphasis\nor to denote \"specialness\". The word \"Docker\" should always be capitalized when referring to either the\ncompany or the technology. The only exception is when the term appears in a code\nsample. Starting sentences Because code samples should always be written exactly as they would appear\non-screen, you should avoid starting sentences with a code sample. 
In headings Headings take sentence capitalization, meaning that only the first letter is\ncapitalized (and words that would normally be capitalized in a sentence, e.g.,\n\"Docker\"). Do not use Title Case (i.e., capitalizing every word) for headings. Generally, we adhere to AP style\nfor titles .",
"title": "Specific conventions"
},
{
"loc": "/project/doc-style#periods",
"tags": "",
"text": "We prefer one space after a period at the end of a sentence, not two. See lists below for how to punctuate list items. Abbreviations and acronyms Exempli gratia (e.g.) and id est ( i.e.): these should always have periods and\nare always followed by a comma. Acronyms are pluralized by simply adding \"s\", e.g., PCs, OSs. On first use on a given page, the complete term should be used, with the\nabbreviation or acronym in parentheses. E.g., Red Hat Enterprise Linux (RHEL).\nThe exception is common, non-technical acronyms like AKA or ASAP. Note that\nacronyms other than i.e. and e.g. are capitalized. Other than \"e.g.\" and \"i.e.\" (as discussed above), acronyms do not take\nperiods, PC not P.C. Lists When writing lists, keep the following in mind: Use bullets when the items being listed are independent of each other and the\norder of presentation is not important. Use numbers for steps that have to happen in order or if you have mentioned the\nlist in introductory text. For example, if you wrote \"There are three config\nsettings available for SSL, as follows:\", you would number each config setting\nin the subsequent list. In all lists, if an item is a complete sentence, it should end with a\nperiod. Otherwise, we prefer no terminal punctuation for list items.\nEach item in a list should start with a capital. Numbers Write out numbers in body text and titles from one to ten. From 11 on, use numerals. Notes Use notes sparingly and only to bring things to the reader's attention that are\ncritical or otherwise deserving of being called out from the body text. Please\nformat all notes as follows: **Note:** One line of note text another line of note text Avoid excess use of \"i.e.\" Minimize your use of \"i.e.\". It can add an unnecessary interpretive burden on\nthe reader. Avoid writing \"This is a thing, i.e., it is like this\". Just\nsay what it is: \"This thing is \u2026\" Preferred usages Login vs. log in. 
A \"login\" is a noun (one word), as in \"Enter your login\". \"Log in\" is a compound\nverb (two words), as in \"Log in to the terminal\". Oxford comma One way in which we differ from AP style is that Docker\u2019s docs use the Oxford\ncomma in all cases. That\u2019s our\nposition on this controversial topic, we won't change our mind, and that\u2019s that! Code and UI text styling We require code font styling (monospace, sans-serif) for all text that refers\nto a command or other input or output from the CLI. This includes file paths\n(e.g., /etc/hosts/docker.conf ). If you enclose text in backticks (`), Markdown\nwill style the text as code. Text from a CLI should be quoted verbatim, even if it contains errors or its\nstyle contradicts this guide. You can add \"(sic)\" after the quote to indicate\nthe errors are in the quote and are not errors in our docs. Text taken from a GUI (e.g., menu text or button text) should appear in \"double\nquotes\". The text should take the exact same capitalization, etc. as appears in\nthe GUI. E.g., Click \"Continue\" to save the settings. Text that refers to a keyboard command or hotkey is capitalized (e.g., Ctrl-D). When writing CLI examples, give the user hints by making the examples resemble\nexactly what they see in their shell: Indent shell examples by 4 spaces so they get rendered as code blocks. Start typed commands with $ (dollar space), so that they are easily\n differentiated from program output. Program output has no prefix. Comments begin with # (hash space). In-container shell commands begin with $$ (dollar dollar space). Please test all code samples to ensure that they are correct and functional so\nthat users can successfully cut-and-paste samples directly into the CLI.",
"title": "Periods"
},
{
"loc": "/project/doc-style#pull-requests",
"tags": "",
"text": "The pull request (PR) process is in place so that we can ensure changes made to\nthe docs are the best changes possible. A good PR will do some or all of the\nfollowing: Explain why the change is needed Point out potential issues or questions Ask for help from experts in the company or the community Encourage feedback from core developers and others involved in creating the\n software being documented. Writing a PR that is singular in focus and has clear objectives will encourage\nall of the above. Done correctly, the process allows reviewers (maintainers and\ncommunity members) to validate the claims of the documentation and identify\npotential problems in communication or presentation. Commit messages In order to write clear, useful commit messages, please follow these recommendations .",
"title": "Pull requests"
},
{
"loc": "/project/doc-style#links",
"tags": "",
"text": "For accessibility and usability reasons, avoid using phrases such as \"click\nhere\" for link text. Recast your sentence so that the link text describes the\ncontent of the link, as we did in the \"Commit messages\" section above. You can use relative links (../linkeditem) to link to other pages in Docker's\ndocumentation.",
"title": "Links"
},
{
"loc": "/project/doc-style#graphics",
"tags": "",
"text": "When you need to add a graphic, try to make the file size as small as possible.\nIf you need help reducing the file size of a high-resolution image, feel free to\ncontact us for help.\nUsually, graphics should go in the same directory as the .md file that\nreferences them, or in a subdirectory for images if one already exists. The preferred file format for graphics is PNG, but GIF and JPG are also\nacceptable. If you are referring to a specific part of the UI in an image, use\ncall-outs (circles and arrows or lines) to highlight what you\u2019re referring to.\nLine width for call-outs should not exceed five pixels. The preferred color for\ncall-outs is red. Be sure to include descriptive alt-text for the graphic. This greatly helps\nusers with accessibility issues. Lastly, be sure you have permission to use any included graphics.",
"title": "Graphics"
}
]
}