In this article, I describe the creation of a Docker Swarm cluster consisting of three nodes, and the attachment of a GlusterFS replicated volume, shared by all nodes, to it.
Docker Swarm mode is used to create a cluster of Docker hosts. In this setup, if container A with the named volume voldata attached to it is running on node1, all changes to voldata are saved locally on node1. If container A is shut down and happens to start up again on, say, node3, then when the voldata volume is attached, that storage will be empty and will not contain the changes made on node1.
One way to solve this problem is to use GlusterFS to replicate volumes, making the data available to all nodes at any time, while the named volumes remain local to each Docker host.
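To see why this happens, note that a named volume uses the local driver by default: its data directory lives only on the host where the volume was created. A quick illustration (my example, not from the original article):

```
$ docker volume create --name voldata
voldata
$ docker volume inspect --format '{{ .Driver }}: {{ .Mountpoint }}' voldata
local: /var/lib/docker/volumes/voldata/_data
```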
To perform this exercise, I used three AWS EC2 instances, each with one EBS volume attached.
We will use Ubuntu 16.04 as the OS.
First, we add the node names to /etc/hosts:
```
XX.XX.XX.XX node1
XX.XX.XX.XX node2
XX.XX.XX.XX node3
```
Then update the system:
```
$ sudo apt update
$ sudo apt upgrade
```
Reboot, then install the required packages on all nodes:
```
$ sudo apt install -y docker.io
$ sudo apt install -y glusterfs-server
```
Start the services:
```
$ sudo systemctl start glusterfs-server
$ sudo systemctl start docker
```
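The article starts the services manually; if you want them to come back after a reboot as well, enabling the units is the usual step on Ubuntu 16.04:

```
$ sudo systemctl enable glusterfs-server
$ sudo systemctl enable docker
```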
Create the directories for the GlusterFS storage:
```
$ sudo mkdir -p /gluster/data /swarm/volumes
```
On all nodes, prepare the file system for the Gluster storage:
```
$ sudo mkfs.xfs /dev/xvdb
$ sudo mount /dev/xvdb /gluster/data/
```
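The manual mount above will not survive a reboot; an /etc/fstab entry along these lines would make it persistent (my addition, not in the original):

```
/dev/xvdb /gluster/data xfs defaults 0 0
```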
On node1:
```
$ sudo gluster peer probe node2
peer probe: success.
$ sudo gluster peer probe node3
peer probe: success.
```
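It is worth confirming that the peers actually see each other before creating the volume; on node1, gluster peer status should list node2 and node3 as connected:

```
$ sudo gluster peer status
```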
Create a replicated volume:
```
$ sudo gluster volume create swarm-vols replica 3 node1:/gluster/data node2:/gluster/data node3:/gluster/data force
volume create: swarm-vols: success: please start the volume to access data
```
Allow mounting only from localhost:
```
$ sudo gluster volume set swarm-vols auth.allow 127.0.0.1
volume set: success
```
Start the volume:
```
$ sudo gluster volume start swarm-vols
volume start: swarm-vols: success
```
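As a sanity check (my addition), gluster volume info should now report the volume as started, of type Replicate, with all three bricks:

```
$ sudo gluster volume info swarm-vols
```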
Then we mount it on each Gluster node:
```
$ sudo mount.glusterfs localhost:/swarm-vols /swarm/volumes
```
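Like the brick mount, this one is manual; if you want it restored on boot, an fstab entry such as the following should work (my assumption; _netdev delays the mount until the network is up):

```
localhost:/swarm-vols /swarm/volumes glusterfs defaults,_netdev 0 0
```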
Our goal is to create one manager and two worker nodes. Initialize the swarm on node1:
```
$ sudo docker swarm init
Swarm initialized: current node (82f5ud4z97q7q74bz9ycwclnd) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
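On a host with several network interfaces, docker swarm init refuses to guess and asks for an explicit address; in that case you would run something like this (not needed on the article's single-interface EC2 instances):

```
$ sudo docker swarm init --advertise-addr 172.31.24.234
```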
Get the join token for worker nodes:
```
$ sudo docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377
```
On both worker nodes, execute:
```
$ sudo docker swarm join --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h 172.31.24.234:2377
This node joined a swarm as a worker.
```
Check the swarm cluster:
```
$ sudo docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
6he3dgbanee20h7lul705q196    ip-172-31-27-191  Ready   Active
82f5ud4z97q7q74bz9ycwclnd *  ip-172-31-24-234  Ready   Active        Leader
c7daeowfoyfua2hy0ueiznbjo    ip-172-31-26-52   Ready   Active
```
We will proceed as follows: create labels for node1 and node3, create a container on node1, shut it down, create it again on node3 with the same volume mounted, and see whether the files created by the container on node1 remain in our storage.
Put labels on swarm nodes:
```
$ sudo docker node update --label-add nodename=node1 ip-172-31-24-234
ip-172-31-24-234
$ sudo docker node update --label-add nodename=node3 ip-172-31-26-52
ip-172-31-26-52
```
Check the labels:
```
$ sudo docker node inspect --pretty ip-172-31-26-52
ID:                     c7daeowfoyfua2hy0ueiznbjo
Labels:
 - nodename = node3
Hostname:               ip-172-31-26-52
Joined at:              2017-01-06 22:44:17.323236832 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  1
 Memory:                1.952 GiB
Plugins:
  Network:              bridge, host, null, overlay
  Volume:               local
Engine Version:         1.12.1
```
Let's create a Docker service on node1 that will be used to test working with files in the shared storage.
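One step the article leaves implicit: the source of a bind mount must exist before the service starts. Since /swarm/volumes is the replicated Gluster mount, creating the test directory on any one node makes it appear on all three (my assumption about how it was prepared):

```
$ sudo mkdir -p /swarm/volumes/testvol
```

With the directory in place, create the service: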
```
$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node1' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile1.txt
duvqo3btdrrlwf61g3bu5uaom
```
Check the service:
```
$ sudo docker service ls
ID            NAME     REPLICAS  IMAGE    COMMAND
duvqo3btdrrl  testcon  0/1       busybox  /bin/bash
```
Ensure that it is running on node1:
```
$ sudo docker service ps testcon
ID                         NAME          IMAGE          NODE              DESIRED STATE  CURRENT STATE           ERROR
6nw6sm8sak512x24bty7fwxwz  testcon.1     ubuntu:latest  ip-172-31-24-234  Ready          Ready 1 seconds ago
6ctzew4b3rmpkf4barkp1idhx   \_ testcon.1  ubuntu:latest  ip-172-31-24-234  Shutdown       Complete 1 seconds ago
```
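The task history already shows a completed task after one second: /bin/touch exits immediately, and the default restart policy (Condition: any, visible in the inspect output below) keeps scheduling replacement tasks. For a true one-shot command you could pass --restart-condition none at creation time, along these lines:

```
$ sudo docker service create --name testcon --restart-condition none \
    --constraint 'node.labels.nodename == node1' \
    --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol \
    ubuntu:latest /bin/touch /mnt/testvol/testfile1.txt
```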
Also check the mounted volumes:
```
$ sudo docker inspect testcon
[
    {
        "ID": "8lnpmwcv56xwmwavu3gc2aay8",
        "Version": {
            "Index": 26
        },
        "CreatedAt": "2017-01-06T23:03:01.93363267Z",
        "UpdatedAt": "2017-01-06T23:03:01.935557744Z",
        "Spec": {
            "ContainerSpec": {
                "Image": "busybox",
                "Args": [
                    "/bin/bash"
                ],
                "Mounts": [
                    {
                        "Type": "bind",
                        "Source": "/swarm/volumes/testvol",
                        "Target": "/mnt/testvol"
                    }
                ]
            },
            "Resources": {
                "Limits": {},
                "Reservations": {}
            },
            "RestartPolicy": {
                "Condition": "any",
                "MaxAttempts": 0
            },
            "Placement": {
                "Constraints": [
                    "nodename == node1"
                ]
            }
        },
        "ServiceID": "duvqo3btdrrlwf61g3bu5uaom",
        "Slot": 1,
        "Status": {
            "Timestamp": "2017-01-06T23:03:01.935553276Z",
            "State": "allocated",
            "Message": "allocated",
            "ContainerStatus": {}
        },
        "DesiredState": "running"
    }
]
```
Shut the service down (docker service rm testcon will remove it) and create it again on node3:
```
$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node3' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile3.txt
5y99c0bfmc2fywor3lcsvmm9q
```
Ensure that it is now running on node3:
```
$ sudo docker service ps testcon
ID                         NAME          IMAGE          NODE             DESIRED STATE  CURRENT STATE           ERROR
5p57xyottput3w34r7fclamd9  testcon.1     ubuntu:latest  ip-172-31-26-52  Ready          Ready 1 seconds ago
aniesakdmrdyuq8m2ddn3ga9b   \_ testcon.1  ubuntu:latest  ip-172-31-26-52  Shutdown       Complete 2 seconds ago
```
As a result, we should see that the files created by both containers end up in the same shared storage:
```
$ ls -l /swarm/volumes/testvol/
total 0
-rw-r--r-- 1 root root 0 Jan  6 23:59 testfile3.txt
-rw-r--r-- 1 root root 0 Jan  6 23:58 testfile1.txt
```
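As a final check (my addition, assuming SSH access between the nodes), the same listing run on any other node should show identical contents, since /swarm/volumes is the same replicated Gluster volume everywhere:

```
$ ssh node2 'ls -l /swarm/volumes/testvol/'
```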
Source: https://habr.com/ru/post/321062/