At LinuxCon+ContainerCon North America this month, Jérôme Petazzoni of Docker will present a free, all-day tutorial “Orchestrating Containers in Production at Scale with Docker Swarm.” As a preview to that talk, this article takes a look specifically at SwarmKit, an open source toolkit used to build multi-node systems.
SwarmKit is a reusable library, like libcontainer, libnetwork, and vpnkit. It is also a plumbing part of the Docker ecosystem. The SwarmKit repository comes with two examples:
- swarmctl (a CLI tool to “speak” the SwarmKit API);
- swarmd (an agent that can federate existing Docker Engines into a Swarm).
This organization is similar to the libcontainer codebase, where libcontainer is the reusable library, containerd is a lightweight container engine using it, and ctr is a CLI to control containerd.
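To get a feel for these two tools, here is a sketch of starting a single-node cluster with swarmd and driving it with swarmctl. It is based on the examples in the SwarmKit README at the time of writing; paths and flags may differ between versions:

```shell
# Start an agent in manager mode, storing state in /tmp/node-1
# (run in one terminal, or background it):
swarmd -d /tmp/node-1 --listen-control-api /tmp/node-1/swarm.sock --hostname node-1

# Point swarmctl at the manager's control socket:
export SWARM_SOCKET=/tmp/node-1/swarm.sock
swarmctl node ls

# Declare a service through the SwarmKit API:
swarmctl service create --name redis --image redis:3.0.5
swarmctl service ls
```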
In this short tutorial, we’ll give an overview of SwarmKit features and its Docker CLI commands, show you how to enable Swarm mode, and then set up your first Swarm cluster. These are the first steps necessary to create and run Swarm services, which can easily be scaled in a pinch.
SwarmKit Features and Concepts
Some of SwarmKit’s features include:
- highly available, distributed store based on Raft
- services managed with a declarative API (implementing desired state and reconciliation loop)
- automatic TLS keying, signing, key renewal and rotation
- dynamic promotion/demotion of nodes, allowing you to change how many nodes (“managers”) are actively part of the Raft consensus
- integration with overlay networks and load balancing
Although a useful cluster will typically have more than one node, SwarmKit can function in single-node scenarios. This is useful for testing, and it lets you use a consistent API and set of tools all the way from single-node development to full cluster deployments.
Nodes can be either managers or workers. Workers merely run containers, while managers also actively take part in the Raft consensus. One manager is elected as the leader; the other managers merely forward requests to it. The managers expose the SwarmKit API, and through that API, you can indicate that you want to run a service.
A service, in turn, is specified by its desired state: for example, which image, how many instances, and so forth:
- The leader uses different subsystems (orchestrator, scheduler, allocator, dispatcher) to break down services into tasks.
- A task corresponds to a specific container, assigned to a specific node.
- Nodes know which tasks should be running, and will start or stop containers accordingly (through the Docker Engine API).
You can refer to the nomenclature in the SwarmKit repository for more details.
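The desired-state model can be pictured as a reconciliation loop. The following is purely illustrative shell (SwarmKit's real orchestrator is Go code inside the manager), but it shows the idea: repeatedly compare observed state to desired state and converge:

```shell
# Toy reconciliation loop (illustration only; not SwarmKit code).
desired=3   # desired state: 3 replicas of a service
actual=1    # observed state: 1 task currently running

while [ "$actual" -ne "$desired" ]; do
  if [ "$actual" -lt "$desired" ]; then
    echo "orchestrator: starting task $((actual + 1))"
    actual=$((actual + 1))
  else
    echo "orchestrator: stopping task $actual"
    actual=$((actual - 1))
  fi
done
echo "converged: $actual tasks running"
```

Because the loop acts on the *difference* between desired and observed state, the same mechanism handles scaling up, scaling down, and replacing tasks lost to node failures.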
Swarm Mode
Docker Engine 1.12 features SwarmKit integration, meaning that all the features of SwarmKit can be enabled in Docker 1.12, and you can leverage them using Docker CLI and API. The Docker CLI features three new commands:
- docker swarm (enable Swarm mode; join a Swarm; adjust cluster parameters)
- docker node (view nodes; promote/demote managers; manage nodes)
- docker service (create and manage services)
The Docker API exposes the same concepts, and the SwarmKit API is also exposed (on a separate socket).
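For instance, the dynamic promotion and demotion mentioned earlier maps to two docker node subcommands. In this sketch, "node2" is a placeholder for one of your own worker nodes:

```shell
# "node2" is a hypothetical node name; use one from "docker node ls".
docker node promote node2    # node2 now takes part in the Raft consensus
docker node demote node2     # node2 goes back to being a plain worker
docker node ls               # the MANAGER STATUS column reflects the change
```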
To follow along with this demo, you’ll need a VM with Docker 1.12 and Compose 1.8. To experiment with scaling, load balancing, and failover, you will ideally need a few VMs connected together. If you are using a Mac, the easiest way to get started on a single node is to install Docker for Mac.
You will also need a Dockerized application. If you need one for demo and testing purposes, you can use DockerCoins: it is built around a microservices architecture and features four very simple services in different languages, as well as a Redis data store.
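If you want to try DockerCoins, here is a sketch of fetching and running it with Compose. The repository URL and directory below are my assumption based on Jérôme's workshop materials; adjust them to wherever you obtained the app:

```shell
# Assumed location of the DockerCoins demo app (not stated in this article);
# substitute your own copy if it lives elsewhere.
git clone https://github.com/jpetazzo/orchestration-workshop.git
cd orchestration-workshop/dockercoins
docker-compose up -d    # starts the four services plus the Redis store
docker-compose ps       # check that everything is running
```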
You need to enable Swarm mode to use these new commands. By default, everything runs as usual. Enabling Swarm mode “unlocks” the SwarmKit features: services, out-of-the-box overlay networks, and so on.
Now, try a Swarm-specific command:
$ docker node ls
Error response from daemon: this node is not participating as a Swarm manager
Creating your first Swarm
The cluster is initialized with docker swarm init. This should be executed on a first, seed node. DO NOT execute docker swarm init on multiple nodes! You would end up with multiple disjoint clusters.
To create your cluster from node1, do:
docker swarm init
To check that Swarm mode is enabled, you can run the traditional docker info command:
docker info
The output should include:
Swarm: active
 NodeID: 8jud7o8dax3zxbags3f8yox4b
 Is Manager: true
 ClusterID: 2vcw2oa9rjps3a24m91xhvv0c
 ...
Next, we will run our first Swarm mode command. Let’s try the exact same command as earlier to list the nodes (well, the only node) of our cluster:
docker node ls
The output should look like the following:
ID             NAME             MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
d1kf...12wt *  ip-172-31-25-65  Accepted    Ready   Active        Leader
Now you have a Swarm cluster!
If you have another node, you can add it to the cluster very easily. In fact, when we did docker swarm init, it showed us which command to use. If you missed it, you can see it again very easily by running:
docker swarm join-token worker
Then, log into the other node (for instance, with SSH) and copy-paste the docker swarm join command that was displayed before. That’s it! The node immediately joins the cluster and can run your workloads.
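The command printed by docker swarm init has the following shape. The token and address below are placeholders; copy the real ones printed on your own seed node:

```shell
# Placeholder token and IP: use the output of "docker swarm init"
# or "docker swarm join-token worker" from your seed node.
docker swarm join \
    --token SWMTKN-1-0xxxx...xxxx \
    172.31.25.65:2377    # manager address; 2377 is the default Swarm port
```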
At this point, if you want to use your cluster to run an arbitrary container, you can do:
docker service create --name helloweb --publish 1234:80 nginx
This will create a container using the official NGINX image and make it reachable from the outside world on port 1234 of the cluster. You can then connect to any cluster node on port 1234 and see the NGINX welcome page.
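From here, scaling the service is a single command. These are standard Docker 1.12 service commands; the port in the curl line matches the --publish flag used when the service was created:

```shell
docker service scale helloweb=3   # new desired state: 3 replicas
docker service ps helloweb        # one line per task, with its assigned node
curl http://localhost:1234/       # the routing mesh forwards to any replica
```

Note that you declare the new replica count rather than starting containers yourself: the managers reconcile the cluster toward that desired state.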
Here, I provided a simple introduction on how to enable Swarm mode and set up your first Swarm cluster. My in-depth ContainerCon training course will provide details about adding nodes to your Swarm, running and testing Swarm services, and more.
Register for LinuxCon + ContainerCon and sign up now to attend “Orchestrating Containers in Production at Scale with Docker Swarm,” presented by Jérôme Petazzoni.
Jérôme Petazzoni works at Docker, where he helps others to containerize all the things. In another life he built clouds when EC2 was just the name of a plane, developed a GIS to deploy dark fiber through the French subway, managed commando deployments of large-scale video streaming systems in bandwidth-constrained environments such as conference centers, operated and scaled the dotCloud PAAS, and other feats of technical wizardry. When annoyed he threatens to replace things with a very small shell script.