This June, during the DockerCon keynote, we saw a demo in which a three-node Swarm cluster was created in 30 seconds using the Swarm clustering toolkit built into Docker Engine 1.12.
Impressive, but naturally I had to try it myself to see it with my own eyes.
docker-machine create -d virtualbox node1
docker-machine create -d virtualbox node2
docker-machine create -d virtualbox node3
docker-machine ssh node1  # repeat for node2, node3
I found that the easiest way to start working with Swarm is to use docker-machine. I used the virtualbox driver to create hosts with Docker Engine installed on them, but you can use any driver you want, for example amazonec2.
docker swarm init --advertise-addr [advertise ip]:2377
Copy the join command printed by the init command and run it on each of your worker nodes to attach them to the cluster.
docker swarm join --token [token] [manager ip]:[manager port]
You now have a swarm cluster!
To start the application, we use the docker service create command. With the --replicas flag, scaling the service is very simple.
First, we want to create an overlay network to deploy our application on.
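As a sketch of how simple scaling is (assuming a service named web, as created further below; both forms existed in the Docker 1.12 CLI):

```shell
# Scale the "web" service from 3 to 5 replicas; the swarm schedules
# the two extra tasks across the available nodes.
docker service scale web=5

# Equivalent declarative form:
docker service update --replicas 5 web
```

Either way, you state the desired replica count and the cluster converges to it.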
docker network create -d overlay mynetwork
Before Docker Engine 1.12, overlay networks required an external key/value store, but thanks to the distributed store built into Docker 1.12, this is no longer necessary.
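A quick way to verify the network was created (a sketch; both commands are standard Docker CLI):

```shell
# List overlay networks; "mynetwork" should appear alongside the
# built-in "ingress" network.
docker network ls --filter driver=overlay

# Show the network's driver, scope, and subnet details.
docker network inspect mynetwork
```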
Let's deploy a simple Apache application on public port 5001. I use an Apache image I found on Docker Hub that displays the ID of the container serving the request; we will use this later to demonstrate load balancing.
docker service create --name web --network mynetwork --replicas 3 -p 5001:80 francois/apache-hostname
You can use the following commands to check your new service:
docker service ls
docker service tasks web
The routing mesh and decentralized architecture added in Docker 1.12 allow any worker node in the cluster to route traffic to other nodes.
In our web service above, we published the cluster-wide port 5001. You can send a request to any node on port 5001, and the routing mesh will forward it to a node that is running the container.
curl [ip]:5001
Whenever a new service is created, a virtual IP is assigned to it. Load balancing is handled by IPVS, a high-performance layer-4 load balancer built into the Linux kernel.
To see this, run curl several times and watch the container ID change.
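A minimal sketch, using the same `[ip]` placeholder as above for any node's address:

```shell
# Hit the published port several times; each request may be served
# by a different replica, so the reported container ID changes.
for i in 1 2 3 4 5; do
  curl -s [ip]:5001
  echo
done
```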
Load balancing in Docker 1.12 is container-aware. With a host-aware balancer such as nginx or haproxy, adding or removing containers requires updating the load balancer's configuration and restarting the service. There is a useful library called Interlock that listens to the Docker Events API and updates the configuration / restarts the service on the fly, but with the new load balancing in Docker 1.12, this tool is far less necessary.
This image from Nigel Poulton summarizes the differences between the old Swarm and the new Swarm very well.
[Image caption: the old way (many steps and commands, not shown) vs. the new way]
With Docker 1.12, you no longer need to set up an external distributed store (consul, etcd, zookeeper) or a separate scheduling service as before. TLS works out of the box, with no "insecure mode". I have no doubt that the new Docker Swarm is the fastest way to get a docker-native cluster up and running, ready to be deployed in production.
What about large scale? Thanks to the efforts of Docker Captain Chanwit Kaewkasi and the DockerSwarm2000 project, we have seen a cluster of 2,384 nodes running 96,287 containers.
Turning on Swarm mode is completely optional. You can think of Swarm mode as a set of hidden procedures that are activated with just one command:
docker swarm init
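A quick sketch of how to confirm those procedures ran (exact output wording may vary by version):

```shell
# After init, verify that swarm mode is active on this engine
docker info | grep -A 1 Swarm   # should report "Swarm: active"

# The current node should be listed as a manager (Leader)
docker node ls
```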
Swarm in Docker 1.12 also supports state reconciliation, rolling image updates, global services, and constraint-based scheduling.
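A minimal sketch of a rolling update using the Docker 1.12 CLI flags; the image tag here is illustrative:

```shell
# Roll out a new image two tasks at a time, waiting 10 seconds
# between batches, so the service stays available throughout.
docker service update \
  --image francois/apache-hostname:latest \
  --update-parallelism 2 \
  --update-delay 10s \
  web
```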
I also want to touch on node failure in Docker Swarm. The service commands in Docker Engine 1.12 are declarative. For example, if you say "I want 3 replicas of this service", the cluster will maintain that state.
If a node running some of those containers fails, swarm will detect that the desired state no longer matches the actual state and will automatically correct the situation by rescheduling the containers onto other available nodes.
To demonstrate this, let's create a new service with three replicas.
docker service create --name web --replicas 3 francois/apache-hostname
Run this to check the service:
docker service tasks web
Now we have one container on each node. Let's shut down node 3 to see the swarm's self-healing in action.
# Run this on node3
docker swarm leave
Now the desired state does not match the actual state: we have only 2 running containers, whereas we asked for 3 when we created the service.
docker service ls shows the replica count drop to two and then return to three, while
docker service tasks web
shows the new container assigned to another node in your cluster.
This example only covers rescheduling containers across worker nodes. A manager failure is a completely different process, especially if the failed manager is the leader of the raft consensus group.
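A sketch of the usual way to guard against manager failure: run an odd number of managers so the raft group keeps quorum when one is lost. The node names are the ones from this walkthrough, with node1 assumed to be the initial manager:

```shell
# Promote two workers so the raft group has 3 managers and can
# tolerate the loss of any one of them, including the leader.
docker node promote node2 node3

# The MANAGER STATUS column shows Leader / Reachable for managers.
docker node ls
```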
Global services are useful when you want to run a container on every node in your cluster. Think of logging or monitoring.
docker service create --mode=global --name prometheus prom/prometheus
Remember I said that services are declarative? When you add a new node to your cluster, swarm detects that the desired state does not match the actual state, and starts an instance of the global container on that node.
To demonstrate constraints, I used docker-machine to spin up a new machine with an engine label.
docker-machine create -d virtualbox --engine-label com.example.storage="ssd" sw3
Then I added it to the swarm.
docker swarm join --token [token] [manager ip]:[manager port]
Next, I created a service that uses this constraint.
docker service create --name web2 --replicas 3 --constraint 'engine.labels.com.example.storage == ssd' francois/apache-hostname
Remember I said that services are declarative? ;) This means that when we scale this service, it remembers our constraints and schedules replicas only on nodes that meet them.
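A short sketch of that behavior, using the web2 service created above:

```shell
# Scaling web2 respects the constraint: only nodes whose engine
# carries the com.example.storage=ssd label receive new replicas.
docker service scale web2=6

# All tasks should be placed on matching nodes (here, sw3).
docker service tasks web2
```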
Docker Captain and Software Engineer at Ippon Technologies who likes to ship things fast. Specializes in agile, microservices, containers, automation, REST, devops.
Source: https://habr.com/ru/post/310606/