Docker swarming

If you haven’t used Docker Swarm yet, it is a wonderful feature that I had yet to test. I tried different things to understand the power behind it, and I also tried a piece of software to manage the swarm that we’ll see in the next article.

What is a swarm

A swarm is a collection of machines that you band together to run multiple copies of your containers, orchestrated for you, without the hassle of managing the resources and the links between them.

If you have been working with Docker, and hopefully with docker-compose, Docker Swarm is just the next step. A swarm allows you to easily distribute and safely scale your docker-compose stack up or down without having to type tons of commands and manage multiple machines at once. Everything is done from a manager machine, and a swarm supports as many workers as you want.

Creating a set of virtual machines

To test this, we first need to create a cluster of machines. Because I do not want to spend money testing this on AWS and I don’t have machines lying around the house, I created a relatively simple multi-machine Vagrant environment like so:

Vagrant.configure("2") do |config|

  config.vm.define "master" do |worker|
    worker.vm.box = "ubuntu/bionic64"
    worker.vm.hostname = "docker-master"
    worker.vm.network "private_network", type: "dhcp"
    worker.vm.network "public_network"
    worker.vm.network "forwarded_port", guest: 8080, host: 8080
    worker.vm.network "forwarded_port", guest: 9000, host: 9000
    worker.vm.provision "docker"
  end

  config.vm.define "worker1" do |worker|
    worker.vm.box = "ubuntu/bionic64"
    worker.vm.hostname = "docker-worker-1"
    worker.vm.network "private_network", type: "dhcp"
    worker.vm.network "public_network"
    worker.vm.provision "docker"
  end

  config.vm.define "worker2" do |worker|
    worker.vm.box = "ubuntu/bionic64"
    worker.vm.hostname = "docker-worker-2"
    worker.vm.network "private_network", type: "dhcp"
    worker.vm.network "public_network"
    worker.vm.provision "docker"
  end
end

This simple Vagrantfile creates 3 machines named master, worker1 and worker2. Running “vagrant up” will bring them all up and install Docker on each of them.

Note that I removed the bridges and the file syncs; you can put whatever you need in there.
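With the Vagrantfile in place, bringing up the cluster and sanity-checking the provisioned Docker install looks like this (assuming Vagrant and a provider like VirtualBox are installed on your host):

```shell
# Bring up all three machines and provision Docker on each
vagrant up

# Check that all three machines report as running
vagrant status

# Confirm Docker was provisioned on the master
vagrant ssh master -c "docker --version"
```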

Swarm the machines together

Once your virtual machines are up, it’s time to connect to each one and issue a few commands to band them together into a Docker swarm.

vagrant ssh master
docker swarm init --advertise-addr <private ip>

vagrant ssh worker1
<run the code docker swarm init displays to join the swarm>

vagrant ssh worker2
<run the code docker swarm init displays to join the swarm>
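If you’ve lost the join command that docker swarm init printed, you don’t have to re-initialize anything; the master can reprint it at any time with the standard Docker CLI:

```shell
# Print the join command (including the token) for adding more workers
docker swarm join-token worker

# The equivalent exists for adding extra managers
docker swarm join-token manager
```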

This will join worker1 and worker2 to the master. Back on the master, you can now see that the nodes are properly registered using the following commands:

vagrant ssh master
docker node ls

This should show the master plus the two worker nodes. Now we can launch a stack, but first, let’s get a little bit of docker-compose magic in there.
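For reference, with the hostnames set in the Vagrantfile above, the output looks roughly like this (the IDs here are made up for illustration; yours will differ):

```shell
docker node ls
# ID               HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS
# abcd1234 *       docker-master     Ready    Active         Leader
# efgh5678         docker-worker-1   Ready    Active
# ijkl9012         docker-worker-2   Ready    Active
```

The asterisk marks the node you are currently connected to.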

Prepare the docker-compose environment

We’ll need to install docker-compose on the master machine; we can easily do this with apt. You don’t need to do this on the workers, just on the master.

sudo apt install docker-compose

And now let’s create a docker-compose file and its supporting files on the master machine. Create yourself a folder and put the following code in each file.

# docker-compose.yml
version: "3.4"

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
    networks:
      - swarm-test

  php:
    image: php:7-fpm
    volumes:
      - ./code:/code
    networks:
      - swarm-test

networks:
  swarm-test:
# ./code/index.php
<?php
echo gethostname();
# ./site.conf
server {
    index index.php index.html;
    server_name php-docker.local;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

And finally, to test this setup, you will need to add a “php-docker.local” entry to the hosts file (/etc/hosts) of the machine that will be running the “curl” command. You can do this on the master VM or on your host machine if it is a Linux/Mac machine. If it is a Windows machine, you’ll have to do this in the “C:\windows\system32\drivers\etc\hosts” file.
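On Linux or Mac, adding the entry boils down to appending a single line to /etc/hosts. The 127.0.0.1 address here assumes you are testing from your host machine through the forwarded port; on the master VM you would use its own address instead. The snippet below demonstrates the line against a temporary file so you can check it before touching the real /etc/hosts (which requires sudo):

```shell
# Work on a temp copy first; append the same line to /etc/hosts once you're happy
HOSTS_FILE=$(mktemp)
echo "127.0.0.1 php-docker.local" >> "$HOSTS_FILE"

# Verify the entry is present
grep "php-docker.local" "$HOSTS_FILE"
```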

Let’s stack

Now that we have our setup, let’s launch a stack using the docker stack deploy command:

docker stack deploy test --compose-file docker-compose.yml

This will start a new stack called test if it doesn’t exist yet, using the docker-compose.yml file to define the services it needs to run, and will launch the different containers.

You do not need to tell it where to run the different containers; they will automatically connect together as needed, even when running on different machines.

Typing the next command should show the status of your services:

docker service ls

ID                  NAME                  MODE                REPLICAS            IMAGE                        PORTS
i9738cbpo05v        test_php              replicated          1/1                 php:7-fpm
x9nf7szq2gyy        test_web              replicated          1/1                 nginx:latest                 *:8080->80/tcp

As you can see, these containers are running, and if you ps the nodes, you can see where each one is running:

docker node ps $(docker node ls -q)

Scaling up

Now that we have multiple nodes and two containers running, it is time to get them to work a little. You can do this by scaling up the services using:

docker service scale test_php=10 test_web=2

This will show you the progress of the containers starting up. Obviously, you probably don’t need ten php-fpm handlers nor two nginx instances, but for the test you can see that it works well by hitting this a few times:

curl http://php-docker.local:8080

This should output different hostnames as you repeat the same command, showing that you don’t even need to worry about the routing; the routing mesh takes care of all that for you.
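A small loop makes the load balancing easy to see (this assumes the stack above is deployed and php-docker.local resolves on the machine running it):

```shell
# Hit the service a few times; each reply prints the hostname of the
# container that served it, so several different names should appear
for i in 1 2 3 4 5; do
  curl -s http://php-docker.local:8080
  echo
done
```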

Next time, I’ll show you a nice tool called Portainer and we’ll look into setting up your own registry for super fast deployments.