They say you shouldn't change your orchestrator midstream.
After I finally got tired of Docker Swarm, with its pseudo-simplicity and constant need for patching up, its awkward handling of distributed file systems, a somewhat raw web interface and narrow functionality, and its lack of out-of-the-box GitLab integration, I decided to deploy my own Kubernetes cluster on my own hardware, namely by deploying Rancher Management Server 2.0.
Installation experience, the fault-tolerance scheme, working with haProxy, and two dashboards are below the cut:
Input data:
HP ProLiant DL320e Gen8 host server - 2 pcs.
VM Ubuntu Server 16.04, 2Gb RAM, 2vCPU, 20Gb HDD - 1 pc. on each host (VM-haProxy).
VM Ubuntu Server 16.04, 4Gb RAM, 4vCPU, 40Gb HDD, 20Gb SSD - 3 pcs. on each host (VM-*-Cluster).
VM Ubuntu Server 16.04, 4Gb RAM, 4vCPU, 100Gb HDD - 1 pc. on either host (VM-NFS).
Network layout:
Getting Started:
Each VM-haProxy has haProxy, fail2ban, and iptables rules on board. It acts as a gateway for all the machines behind it. There are two such gateways, and if a machine loses its gateway connection on its own host, it switches over to the other one.
The main job of these nodes (VM-haProxy) is to distribute access to the backends, balance load, forward ports, and collect statistics.
My choice fell on haProxy as a tool narrowly focused on balancing and health checking. On top of that, I like its configuration directive syntax, its handling of IP whitelists and blacklists, and its support for multi-domain SSL.
HaProxy configuration:
haproxy.conf with comments
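The full config is not reproduced here, so below is only a minimal sketch of the idea: TLS termination on the gateway, routing by hostname, and health checks against the backends. The hostnames, ports, and certificate path are assumptions, not the actual production config.

global
    maxconn 2048

defaults
    mode http
    option forwardfor
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend http-in
    bind *:80
    redirect scheme https code 301

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    # route by Host header to the right backend
    acl host_rancher hdr(host) -i rancher.domain.ru
    acl host_dashboard hdr(host) -i kubernetes.domain.ru
    use_backend rancher if host_rancher
    use_backend dashboard if host_dashboard

backend rancher
    # Rancher serves a self-signed certificate, so re-encrypt without verification
    option httpchk GET /healthz
    server master master:443 ssl verify none check

backend dashboard
    server master master:9090 ssl verify none check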
Important: All machines must “know” each other by the host name.
add-host-entry.pp, a Puppet manifest that adds the hostnames to /etc/hosts:

class host_entries {
  host { 'proxy01': ip => '10.10.10.11', }
  host { 'proxy02': ip => '10.10.10.12', }
  host { 'master': ip => '10.10.10.100', }
  host { 'node01': ip => '10.10.10.101', }
  host { 'node02': ip => '10.10.10.102', }
  host { 'node03': ip => '10.10.10.103', }
  host { 'node04': ip => '10.10.10.104', }
  host { 'node05': ip => '10.10.10.105', }
  host { 'nfs': ip => '10.10.10.200', }
}
VM-Master-Cluster - the main control machine. Unlike the other nodes, it runs Puppet Master, GlusterFS server, Rancher Server (container), etcd (container), and the controller manager (container). If this host goes down, production services keep working.
VM-Node-Cluster - the nodes, the workers. Worker machines whose resources are pooled into one fault-tolerant environment. Nothing interesting here.
VM-NFS - the NFS server (nfs-kernel-server). Its main job is to provide scratch space. It stores configuration files and the like, nothing important. If it goes down, it can be fixed at a leisurely pace, over a cup of coffee.
Important: all machines in the environment must have docker.io, nfs-common, and glusterfs-server on board.
must-have-packages.pp, a Puppet manifest that installs the required software:

class musthave {
  package { 'docker.io': ensure => 'installed', }
  package { 'nfs-common': ensure => 'installed', }
  package { 'glusterfs-server': ensure => 'installed', }
}
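If you don't want to wait for a Puppet agent run, the same set of packages can of course be installed by hand (using the glusterfs-server package name from Ubuntu 16.04):

sudo apt-get update
sudo apt-get install -y docker.io nfs-common glusterfs-server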
Mounting the NFS volume and setting up GlusterFS will not be described here, since both are generously documented elsewhere.
As you may have noticed in the specs, the cluster VMs have SSD disks; these are reserved for the GlusterFS distributed file system. Create the partitions and volumes on the fast disks.
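As a rough illustration only: preparing an SSD brick and creating a replicated volume could look something like the following. The device name, mount point, volume name, and the choice of three replicas across the first three cluster machines are my assumptions, not the actual layout.

# on each cluster node: format the SSD and mount it as a Gluster brick
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /data/glusterfs/brick1
echo '/dev/sdb /data/glusterfs/brick1 xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a

# on the master: join the peers and create a replica-3 volume
sudo gluster peer probe node01
sudo gluster peer probe node02
sudo gluster volume create gv0 replica 3 \
  master:/data/glusterfs/brick1/gv0 \
  node01:/data/glusterfs/brick1/gv0 \
  node02:/data/glusterfs/brick1/gv0
sudo gluster volume start gv0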
Note: in fact, Rancher does not require a mirrored environment like this to run. Everything above is simply my view of the cluster and a description of the practices I follow.
To run Rancher, a single machine with 4 CPU, 4Gb RAM, and 10Gb HDD is enough.
5 minutes to Rancher.
On VM-Master-Cluster, run:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Check availability:
curl -k https://localhost
If you see the API response, congratulations: you are exactly halfway there.
Looking at the network diagram again, remember that we will be working from outside, through haProxy; in this configuration the service is published at rancher.domain.ru. Open the link and set your password.
The next page is the Kubernetes cluster creation page.
In the Cluster Options menu, select Flannel. I did not manage to get the other network providers working, so I cannot advise on them.
Note that we ticked the etcd and Control Plane checkboxes; the Worker checkbox is left unticked, since we do not plan to use the manager in worker mode.
We are working inside the local network, with a single address on the NIC, so the same IP goes into both the Public Address and Internal Address fields.
Copy the generated command shown above and run it in the console.
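For reference, the command Rancher 2.0 generates for a management node looks roughly like this; the agent version, token, and CA checksum are placeholders taken from the UI, and the address matches the master from the hosts file above:

sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://rancher.domain.ru \
  --token <node-token> --ca-checksum <checksum> \
  --address 10.10.10.100 --internal-address 10.10.10.100 \
  --etcd --controlplane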
After a while, the web interface will show a message that the node has been added, and a little later the Kubernetes cluster will come up.
To add a worker, go to cluster editing in the Rancher web interface; you will see the same menu that generates the join command.
Tick only the Worker checkbox, specify the IP of the future worker, copy the command, and run it in the console of the node you need.
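The worker variant of the generated command differs only in its flags, roughly like this (again, version, token, and checksum are placeholders; the address is node01 from the hosts file as an example):

sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://rancher.domain.ru \
  --token <node-token> --ca-checksum <checksum> \
  --address 10.10.10.101 --worker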
After a while, the cluster's capacity will grow along with the number of nodes.
Installing Kubernetes Dashboard:
Go to the Projects / Namespaces menu.
After installation you will see that the Kubernetes namespaces sit outside of any project. To work with these namespaces fully, they have to be placed into a project.
Add a project and name it as you see fit. Move the namespaces (cattle-system, ingress-nginx, kube-public, kube-system) into the project you created using the Move context menu. It should end up looking like this:
Click on the project name itself and you will be taken to the workload control panel. This is where we will walk through creating a simple service.
Click "Import YAML" in the upper right corner. Copy and paste the contents of
this file into the textbox of the window that opens, select the namespace "kube-system", click "Import".
After some time, the pod kubernetes-dashboard will start.
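To double-check from the kubectl console that the pod really is up, something like this should do (matching by name is my shortcut):

kubectl -n kube-system get pods | grep kubernetes-dashboard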
Go to pod editing, open the port publishing menu, and set the following values:
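The values from the screenshot are not reproduced here; in YAML terms the publication corresponds roughly to a hostPort mapping like the one below. The host port 9090 follows the curl check just below; container port 8443 is the dashboard's default HTTPS port and is my assumption.

# fragment of the kubernetes-dashboard container spec
ports:
  - name: dashboard
    containerPort: 8443
    hostPort: 9090
    protocol: TCP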
Check access on the node where the pod is running:
curl -k https://master:9090
Got a response? The publishing part is done; all that remains is to reach the admin interface.
The main cluster management page in Rancher offers some very handy tools, such as kubectl, a cluster management console, and the Kubeconfig File, a configuration file containing the API address, ca.crt, and so on.
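As a sketch of how the Kubeconfig File can be used from a workstation (the file name and path are assumptions):

# point kubectl at the cluster using the file downloaded from Rancher
export KUBECONFIG=~/Downloads/kubeconfig.yaml
kubectl get nodes
kubectl cluster-info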
Go to kubectl and execute:
kubectl create serviceaccount cluster-admin-dashboard-sa
kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa
We have created a service account with elevated privileges; now we need a token to access the Dashboard.
Find the secret of the created account:
kubectl get secret | grep cluster-admin-dashboard-sa
You will see the secret name with a hash at the end; copy it and run:
kubectl describe secret cluster-admin-dashboard-sa-token-<hash>
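If you would rather pull the token out in one go, a one-liner along these lines (assuming the default namespace, as in the commands above) also works:

kubectl get secret $(kubectl get secret | grep cluster-admin-dashboard-sa | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode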
Once again, remember that everything is safely published through haProxy.
Follow the link kubernetes.domain.ru and enter the token you received.
Rejoice:
PS
To sum up, I would like to thank Rancher for an intuitive interface, an easily deployed instance, simple documentation, the ability to move quickly, and scalability at the cluster level. Perhaps I was too harsh at the start of the post about being tired of Swarm; rather, the obvious development trends made me look in that direction instead of grinding through the same boring routine to the end. Docker created an era of development, and it is certainly not for me to judge that project.