We talked about how containers work from a high-level perspective, but the platform is not only the Docker Engine. Naturally it’s indispensable, because everything is built around it, but there are other interesting pieces, such as the registry. A Docker registry comes in two variants – public and private. You could think of it as a database, but that’s just boring. Think of the registry instead as a library with lots of shelves for books – repositories for our Docker images.
The library can have a public zone with books available to everyone. You can borrow a book, and you can also write one and send it to the library; it will be available to everyone, but only you will be the author, only you will be able to change it, and everyone else will have read access. The library can also have private zones, where only you and the people you choose can read and write books. The public Docker registry, by analogy, is called Docker Hub; in 2016 its database contained almost half a million container images and over 6 billion downloads had been made. And while we’re talking about public repositories, keep in mind that there are also private ones… Docker Hub is available from the cloud; if you need to install a registry locally in your datacenter, that is possible too – either the commercial Docker Trusted Registry or the free, open-source Docker Registry. They differ in several details, but the most important one is the existence of support from Docker Inc. for the commercial version and its absence for the open-source one. Whichever version you choose, to run an image Docker will have to use the ‘docker pull’ command.
That means, no more and no less, downloading the container image from a repository in the registry to our Hardware Land and making it available to the Docker Engine to run. You can also build your own image and pass it into a selected repository in the registry with the ‘docker push’ command. So think of a Docker registry as a base of ready-to-use images, both yours and those prepared by the community. It is also worth mentioning that Docker’s registry is not the only option; there are quite a few available – GitLab Container Registry or Artifactory are just two flagship examples. This matters because if your organization already runs software such as GitLab (usually used as a code repository), it can also serve as your container registry. It is worth looking around rather than creating new entities that may not be needed. As Master Miyagi reminds us: the best way to success is simplicity of architecture and the use of tools we already know.
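The round trip between a registry and the local engine can be sketched with a handful of commands. The repository name `myuser/my-nginx` below is only an illustrative placeholder:

```shell
# Download the official nginx image from Docker Hub (the default registry)
docker pull nginx:latest

# Tag it for your own repository (replace "myuser" with your account)
docker tag nginx:latest myuser/my-nginx:1.0

# Authenticate and push the image into the registry
docker login
docker push myuser/my-nginx:1.0
```

The same `pull`/`push` pair works against a private registry; you only prefix the image name with the registry host, e.g. `registry.example.local/myuser/my-nginx:1.0`.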
We have an engine and we have a registry, so at this point it’s worth saying a little about the Docker Content Trust mechanism. This is, generally speaking, a mechanism used to verify the origin of a container image and check its integrity. Before the author sends anything to a Docker registry – let’s take Docker Hub as an example – the image is signed with the author’s private key. When a consumer pulls it for the first time, a trust relationship is established and checks are made to ensure that the image has a valid signature and is what it claims to be. This can be compared to the trust-on-first-use model of an SSH connection, the difference being that there the secure channel is initiated through SSH and PKI keys. This mechanism primarily protects us from man-in-the-middle attacks, where the image is replaced on the fly and instead of what we asked for we get a bomb. For the sake of completeness: apart from Docker Content Trust, the platform also gives us role-based access control (RBAC) and integration with AD/LDAP and SSO.
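Content trust is switched on per shell session with an environment variable; with it enabled, the engine refuses to pull unsigned images (the exact error text varies between Docker versions):

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# This pull now succeeds only if the tag has a valid signature
docker pull nginx:latest

# Pulling an unsigned tag fails instead of silently running untrusted code

# Pushing with DOCKER_CONTENT_TRUST=1 signs the image with your key
# (Docker prompts you to create the key material on first push)
```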
Now that you can picture how to run a container with the Docker Engine and how the Docker Content Trust mechanism works (protecting us from downloading a dud from the Docker Registry), it’s high time to answer the question: what happens when we have more than one physical/virtual server with the Docker Engine on board? This is the moment when the staged Docker environment leaves your laptop to become production! You’d need to figure out how to run containers on more than one node, right? You need to think about a cluster – a set of servers that provides a layer of abstraction which in turn allows scaling resources up and launching dozens, hundreds or thousands of containers. This is the moment when Docker Swarm comes into play. It is a system that can be compared to VMware vCenter Server (without a GUI, console only) managing a group of ESX servers, which in our case are the Docker Engines. Of course, there are more differences than similarities, but it’s all about getting the picture of how it works. Docker Swarm gives us:
- Cluster Management integrated with Docker Engine – many Docker Engines become available from one place as a resource pool, which enables automated deployment of new containers on cluster nodes – we do not have to say that container A should be located on host 1 and container B on host 2.
- Decentralized Architecture – there are no differences in the deployment models, both Manager nodes and Worker nodes are installed from the same set of binaries, so you can use one operating system image with one type of Docker Engine installed.
- Declarative Action Model – Docker uses a declarative approach in defining the application stack, its services’ status and roles. This means nothing less than that we can describe a complex application – a front-end service with a message-queuing service and a backend database – and run it on such a defined cluster.
- Scaling – It is possible to automatically add more container replicas of a service, as well as to remove them.
- Determining the Desired Application Status – Swarm continuously monitors the state of the cluster, and if it finds that, e.g., in a service we defined as consisting of 10 container replicas only 8 are running, it will automatically launch the 2 missing ones or restart the non-operating containers according to the rules we defined.
- Network between Cluster Nodes – One of the most interesting mechanisms in Docker Swarm, in my opinion, is its network support – more precisely, the ability to establish an automatic overlay network between cluster nodes. It works on the basis of VXLAN, so it’s nothing original, just the current standard in overlay networks. Thanks to this we can achieve micro-segmentation between sites and, most importantly, Docker Swarm takes care of it automatically. The only thing you need to do is prepare the transport network properly.
- Service Discovery – Swarm cluster nodes assign a unique DNS name to each service, thanks to which you can query any container with DNS at any time.
- Load Balancing – Swarm offers you an internal Load Balancing mechanism but you can also use an external Load Balancer.
- Security by Default – Each cluster node enforces mutual authentication and TLS encryption of its communication. You can use self-signed certificates or an external or company root CA.
- Rolling Updates – the ability to update a part of the cluster, wait some time, and then update subsequent nodes. This allows quick detection of problems on a small part of the cluster and an appropriate response without compromising the entire environment.
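Two of the features above – the overlay network and DNS-based service discovery – can be seen in practice with a few commands run on a manager node (service and network names are illustrative):

```shell
# Create an attachable overlay network spanning the cluster nodes
# (VXLAN under the hood)
docker network create --driver overlay --attachable app-net

# Attach two services to that network
docker service create --name web --network app-net nginx
docker service create --name db --network app-net \
  --env POSTGRES_PASSWORD=example postgres

# From inside any container on app-net, the names "web" and "db"
# resolve via the cluster's built-in DNS, e.g.:
#   ping web
```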
The high-level architecture of the solution is presented in the diagram below:
As you can see, we have three types of entities here. The first is the Manager – a cluster node responsible for assigning tasks to the Worker nodes, general cluster management and orchestration of tasks; it also provides the endpoint for the cluster API. For high availability the cluster requires a minimum of 3 Manager nodes, in which case at most one can fail without losing the quorum, or 5, allowing a simultaneous failure of 2 nodes. Losing the quorum makes the cluster unmanageable, but the services running on it will continue to work. The second type are the Worker nodes – instances of Docker Engine whose main role is to run containers. Worker nodes need a minimum of one Manager node to work. The last entity is our good old service… it is what lets you run containers on the Swarm cluster. We define the service type ourselves; for example it may be a web frontend, a database, etc. The definition of a service may include which container image is to be used as well as which commands are to be run inside the already running containers. We can choose from the following elements and actions:
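Bootstrapping a minimal cluster of this shape takes only a few commands; the IP address, token and node names below are placeholders:

```shell
# On the first machine: initialize the swarm; this node becomes a Manager
docker swarm init --advertise-addr 192.168.1.10

# The command above prints a join token; on each Worker machine run:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the Manager: promote two more nodes to reach 3 Managers
docker node promote node2 node3

# Verify the cluster membership and node roles
docker node ls
```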
- Service Options – When you create a service, you can specify: the service port that is to be mapped to the external network, the overlay network the service is to belong to, RAM and CPU limits, the update policy, and the number of running container replicas within the service.
- Replicated vs. Global Services – A replica in the context of a service means the number of identical tasks, i.e. containers, that are launched to provide a given service. To use an example – say you want to run 4 API Gateway replicas for your application. You can do it in two modes: a global service, which runs one replica on each node of the cluster (you run as many replicas as you have Docker Engines), so when you add another host with Docker Engine installed, another replica is created; alternatively, a replicated service, where you specify the number of replicas – for example the mentioned 4 – and let the cluster Managers create exactly that number of containers.
- Determining the Desired State of the Application – when you start a service in Docker Swarm mode, you need to define the state of the service you expect, which is then maintained by the cluster. For example, as above, you have defined that you want a service running 4 instances of the API Gateway for your application, with load balancing between them. The Swarm Manager launches a service consisting of 4 container replicas on the 4 available Docker Engines in the cluster. If one instance goes down, no matter for what reason, the cluster Manager recognizes that the state of the service has changed: if required, it removes the damaged container (replica) and starts a new one from the predefined image so that the service can return to the state we defined.
- Operation during a Container Failure – Docker Swarm, when it detects that a container has a problem with its operation, does not restart or attempt to repair it. The orchestrator built into the cluster nodes simply removes the damaged container and replaces it with a new one.
- Pending Services – When defining a service, you can set certain requirements related to Hardware Land and its configuration which are necessary for its operation. For example, we may define that the service cannot run on anything weaker than a Docker Engine equipped with 128 GB of RAM; until such a node is added to the Swarm cluster, the service will remain in the pending state.
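The options described above map directly onto `docker service create` flags; a sketch, with all names, ports and label values purely illustrative:

```shell
# Run 4 replicas of an API gateway: publish a port, cap resources,
# and define a rolling-update policy (one node at a time, 30s apart)
docker service create \
  --name api-gateway \
  --replicas 4 \
  --publish published=8080,target=80 \
  --limit-memory 512M \
  --limit-cpu 0.5 \
  --update-parallelism 1 \
  --update-delay 30s \
  nginx

# A global service runs exactly one task on every cluster node instead
docker service create --name agent --mode global nginx

# A placement constraint keeps a service pending until a matching node joins
docker service create --name heavy \
  --constraint 'node.labels.ram==128G' nginx

# Scale a replicated service up or down at any time
docker service scale api-gateway=6
```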
Summing up, Docker Swarm is a mode of operation in which many Docker Engines run services comprised of replicas, i.e. tasks represented by containers. It is worth mentioning here that a single container equals one task. This is a very important thing! We are used to servers providing more than one function – e.g. a web server and a database on one VM / physical host. Docker departs from this model… it’s not banned, of course, but it’s generally not standard practice. One container, one task. One container can of course serve many users, but as a rule it performs only one function.
I believe one important thing to mention here is that Docker Swarm is not the only option as a tool for clustering and orchestration. In autumn 2017 at DockerCon it was announced that Docker Inc. will also support, in addition to Swarm, a solution called Kubernetes… but this is a topic for a completely different story. For the sake of order, I will only mention that there are still other engines for orchestration and container clustering, such as Mesosphere or CoreOS Fleet, apart from Docker Swarm and Kubernetes. For now, let’s just focus on Docker Swarm, because it is readily available, in a package with the Docker Engine, so you don’t have to install anything extra – just turn it on and start using it. For a start that is plenty, and over time you can freely change the orchestrator or use a managed service such as Google Container Engine or Amazon Elastic Container Service.
Next part coming soon! Stay tuned.