
Rancher: Kubernetes in 5 minutes on bare iron

They say you shouldn't change the orchestrator in midstream.



After I finally got tired of Docker Swarm (its pseudo-simplicity and the constant tinkering it demands, its inconvenient handling of distributed file systems, its somewhat raw web interface and narrow functionality, and the lack of out-of-the-box GitLab integration), I decided to deploy my own Kubernetes cluster on my own hardware, namely by deploying Rancher Management Server 2.0.



Under the cut: the installation experience, a fault tolerance scheme, working with haProxy, and two dashboards:



Input data:


HP ProLiant DL320e Gen8 host servers - 2 pcs.

VM Ubuntu Server 16.04, 2 GB RAM, 2 vCPU, 20 GB HDD - 1 pc. on each host (VM-haProxy).

VM Ubuntu Server 16.04, 4 GB RAM, 4 vCPU, 40 GB HDD, 20 GB SSD - 3 pcs. on each host (VM-*-Cluster).

VM Ubuntu Server 16.04, 4 GB RAM, 4 vCPU, 100 GB HDD - 1 pc. on either host (VM-NFS).



Network layout:



Pic.1




Getting Started:



VM-haProxy runs haProxy, fail2ban and a set of iptables rules on board. It acts as the gateway for all the machines behind it. We have two such gateways, and if a machine loses connection to the gateway on its own host, it switches over to the other one.
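
The post does not spell out how exactly the machines switch gateways; one common way to implement it is a floating virtual IP shared by the two VM-haProxy machines via keepalived (VRRP), with the backend VMs pointing their default route at that VIP. A minimal sketch for proxy01, not taken from the original setup; the interface name and the VIP are assumptions, and proxy02 would use state BACKUP with a lower priority:

 # /etc/keepalived/keepalived.conf on proxy01 (assumed setup, not from the original post)
 vrrp_instance VI_GW {
     state MASTER
     interface ens160              # assumption: name of the internal NIC
     virtual_router_id 51
     priority 150
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass s3cr3t
     }
     virtual_ipaddress {
         10.10.10.1/24             # assumption: VIP that the backend VMs use as default gateway
     }
 }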



The main task of these nodes (VM-haProxy) is to distribute access to the backends, balance load, forward ports and collect statistics.



My choice fell on haProxy as a tool more narrowly focused on balancing and health checking. On top of that, I like the syntax of its configuration directives, its work with IP whitelists and blacklists, and its handling of multi-domain SSL connections.



HaProxy configuration:



haproxy.conf with comments
##########################################################
# Global                                                 #
##########################################################
global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy
    tune.ssl.default-dh-param 2048

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    timeout connect 5000
    timeout client  10000
    timeout server  10000
    option  forwardfor
    option  http-server-close

##########################################################
# TCP                                                    #
##########################################################
# Kubernetes API
listen kube-api-tls
    bind *:6443
    mode tcp
    option tcplog
    server VM-Master-Cluster Master:6443

##########################################################
# HTTP/HTTPS - frontends and backends                    #
##########################################################
frontend http-in
    bind *:80
    acl network_allowed src -f /path/allowed-ip    # file with the whitelisted source IPs
    http-request deny if !network_allowed          # deny everything that is not whitelisted
    reqadd X-Forwarded-Proto:\ http
    mode http
    option httpclose
    acl is_haproxy hdr_end(host) -i haproxy.domain.ru
    acl is_rancher hdr_end(host) -i rancher.domain.ru
    acl is_kubernetes hdr_end(host) -i kubernetes.domain.ru
    use_backend kubernetes if is_kubernetes
    use_backend rancher if is_rancher
    use_backend haproxy if is_haproxy

frontend https-in
    bind *:443 ssl crt-list /path/crt-list         # list of certificates for the multi-domain SSL setup
    acl network_allowed src -f /path/allowed-ip
    http-request deny if !network_allowed
    reqadd X-Forwarded-Proto:\ https
    acl is_rancher hdr_end(host) -i rancher.domain.ru
    acl is_kubernetes hdr_end(host) -i kubernetes.domain.ru
    use_backend kubernetes if is_kubernetes { ssl_fc_sni kubernetes.domain.ru }
    use_backend rancher if is_rancher { ssl_fc_sni rancher.domain.ru }

# Backend for the haProxy stats page
backend haproxy
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth login:passwd
    cookie SERVERID insert nocache indirect

# Backends for the rancher and kubernetes dashboards
backend rancher
    acl network_allowed src -f /path/allowed-ip
    http-request deny if !network_allowed
    mode http
    redirect scheme https if !{ ssl_fc }
    server master master:443 check ssl verify none

backend kubernetes
    acl network_allowed src -f /path/allowed-ip
    http-request deny if !network_allowed
    mode http
    balance leastconn
    redirect scheme https if !{ ssl_fc }
    server master master:9090 check ssl verify none
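
Before reloading haProxy it is worth validating the file; the path below assumes the default location of the config:

 haproxy -c -f /etc/haproxy/haproxy.cfg && sudo systemctl reload haproxy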




Important: All machines must “know” each other by the host name.



add-host-entry.pp puppet manifest for adding host names to /etc/hosts
 class host_entries {
   host { 'proxy01': ip => '10.10.10.11', }
   host { 'proxy02': ip => '10.10.10.12', }
   host { 'master':  ip => '10.10.10.100', }
   host { 'node01':  ip => '10.10.10.101', }
   host { 'node02':  ip => '10.10.10.102', }
   host { 'node03':  ip => '10.10.10.103', }
   host { 'node04':  ip => '10.10.10.104', }
   host { 'node05':  ip => '10.10.10.105', }
   host { 'nfs':     ip => '10.10.10.200', }
 }
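
For reference, the entries this class manages in /etc/hosts:

 10.10.10.11   proxy01
 10.10.10.12   proxy02
 10.10.10.100  master
 10.10.10.101  node01
 10.10.10.102  node02
 10.10.10.103  node03
 10.10.10.104  node04
 10.10.10.105  node05
 10.10.10.200  nfs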




VM-Master-Cluster is the main control machine. Unlike the other nodes, it runs Puppet Master, GlusterFS Server, Rancher Server (container), etcd (container) and the controller manager (container). If this host goes down, production services keep working.

VM-Node-Cluster - the nodes, the workers: machines whose resources are pooled into one fault-tolerant environment. Nothing interesting here.



VM-NFS - an NFS server (nfs-kernel-server). Its main task is to provide buffer storage. It keeps configuration files and other odds and ends, nothing important. Its failure can be dealt with unhurriedly, over a cup of coffee.
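
The post does not show the NFS export itself; a minimal /etc/exports entry for such a buffer share might look like this (the export path is an assumption):

 # /etc/exports on VM-NFS (export path is an assumption)
 /srv/nfs    10.10.10.0/24(rw,sync,no_subtree_check)
 # apply the export without restarting the server
 sudo exportfs -ra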



Important: All environment machines must have on board: docker.io, nfs-common, glusterfs-server.



must-have-packages.pp puppet manifest for installing the necessary software
 class musthave {
   package { 'docker.io':
     ensure => 'installed',
   }
   package { 'nfs-common':
     ensure => 'installed',
   }
   package { 'glusterfs-server':
     ensure => 'installed',
   }
 }
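
The same packages can also be installed by hand with apt while Puppet is not wired up yet:

 sudo apt-get update && sudo apt-get install -y docker.io nfs-common glusterfs-server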




I will not describe mounting the NFS volume or setting up GlusterFS here, since both topics are already generously covered elsewhere.



As you may have noticed in the specs, the cluster VMs have SSD disks: they are set aside for the GlusterFS distributed file system. Create the partitions and volumes on these fast disks.
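
Purely as an illustration (the post deliberately skips these details), a replicated volume across three SSD-backed bricks could be created roughly like this; the brick paths and the volume name are assumptions:

 sudo gluster peer probe node01
 sudo gluster peer probe node02
 sudo gluster volume create gv-rancher replica 3 \
   master:/gluster/brick node01:/gluster/brick node02:/gluster/brick
 sudo gluster volume start gv-rancher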



Note: Rancher does not actually require a mirrored environment like this to run. Everything above is my own view of the cluster and a description of the practices I follow.



To run Rancher, a single machine with 4 vCPU, 4 GB RAM and 10 GB HDD is enough.



5 minutes to Rancher.



On VM-Master-Cluster, run:



 sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher 
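
One addition worth considering (it is not in the original command): a bind mount so that Rancher's data survives re-creation of the container. The host path here is an assumption:

 sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
   -v /opt/rancher:/var/lib/rancher rancher/rancher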


Check availability:



 curl -k https://localhost 


If you see the API, congratulations: you are exactly halfway there.



Looking at the network diagram again, remember that we will be working from the outside, through haProxy; in our configuration the service is published as rancher.domain.ru. Open it and set your password.



The next page is the Kubernetes cluster creation page.



Pic.2




In the Cluster Options menu, select Flannel. I have not worked with the other network providers, so I cannot advise on them.



Note that the etcd and Control Plane checkboxes are set and the Worker checkbox is not, assuming you don't plan to use the control node in worker mode.

We work inside the local network with a single address on the NIC, so the same IP goes into both the Public and Internal Address fields.



Copy the resulting command shown above and run it in the console.
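
For reference, the command Rancher generates looks roughly like this; the version tag, token and checksum below are placeholders, and the role flags correspond to the checkboxes selected above:

 sudo docker run -d --privileged --restart=unless-stopped --net=host \
   -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
   rancher/rancher-agent:v2.0.x \
   --server https://rancher.domain.ru \
   --token <token> --ca-checksum <checksum> \
   --etcd --controlplane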



After a while you will see a message in the web interface about the node being added, and shortly afterwards the Kubernetes cluster will start.



Pic.3




To add a worker, go to cluster editing in the Rancher web interface; you will see the same menu that generates the connection command.



Set only the Worker checkbox, specify the IP of the future worker, copy the command, and execute it in the console of the node you need.



After a while the cluster capacity will grow, along with the number of nodes.
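
You can confirm that the new worker has joined from the kubectl console described further below (or any kubectl configured for this cluster):

 kubectl get nodes -o wide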



Installing Kubernetes Dashboard:



Go to the Projects / Namespaces menu.



After installation you will see that the Kubernetes namespaces sit outside of any project. To work with these namespaces fully, they must be moved into a project.



Add a project and name it at your discretion. Move the namespaces (cattle-system, ingress-nginx, kube-public, kube-system) into the project you created using the Move context menu. It should end up looking like this:



Pic.4




Click on the project name itself and you will be taken to the workload control panel. This is where we will look at how to create a simple service.



Pic.5




Click "Import YAML" in the upper right corner. Copy and paste the contents of this file into the textbox of the window that opens, select the namespace "kube-system", click "Import".



After a while, the kubernetes-dashboard pod will start.



Go to pod editing, open the port publishing menu and set the following values:



Pic.6




Check access on the node where the pod is running.



 curl -k https://master:9090 


Got a response? Then the port is published; all that remains is to reach the administrative part.



On the main cluster management page in Rancher there are very handy tools, such as kubectl, the cluster management console, and the Kubeconfig File, a configuration file containing the API address, ca.crt and so on.



Go to kubectl and execute:



 kubectl create serviceaccount cluster-admin-dashboard-sa
 kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa


We have created a service account with elevated privileges; now we need a token to access the Dashboard.



Find the secret of the created account:



 kubectl get secret | grep cluster-admin-dashboard-sa 


You will see the secret name (the account name with a token hash at the end); copy it and run:



 kubectl describe secret cluster-admin-dashboard-sa-token-<hash>
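
If you would rather not copy the name by hand, the lookup and the describe can be combined into one line (a convenience sketch, not from the original post):

 kubectl describe secret $(kubectl get secret | awk '/cluster-admin-dashboard-sa/ {print $1}') | grep ^token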


Once again, remember that everything has been safely published through haProxy.



Follow the link kubernetes.domain.ru and enter the token you obtained.



Rejoice:



Pic.7




PS

In summary, I would like to thank Rancher for an intuitive interface, an easily deployable instance, simple documentation, the ability to move quickly, and scalability at the cluster level. Perhaps I was too harsh at the beginning of the post about being tired of Swarm; rather, the obvious development trends made me look elsewhere instead of grinding the same dull routine to the end. Docker created an era of development, and it is certainly not for me to judge that project.

Source: https://habr.com/ru/post/418691/


