
Running a full-fledged Kubernetes cluster from scratch on Ubuntu 16.04

Quite a lot of articles have already been written on installing and launching Kubernetes, yet not everything goes smoothly (it took me several days to get my cluster running).



This article is intended not only to provide comprehensive information on installing k8s, but also to explain each step: why we do it exactly as written (this is very important for a successful launch).



What you need to know



Servers:

A cluster means that you have more than one physical server, among which resources are distributed. The servers are called nodes.


Disks:

Ordinary local hard drives are not usable directly in k8s. Disk access goes through distributed storage. This is necessary so that k8s can "move" Docker containers to other nodes when needed without losing data (files).



Building the cluster starts with setting up your own distributed storage. If you are sure you will never need persistent disks, you can skip this step.

I chose Ceph, and I recommend reading this wonderful article.



The minimum reasonable number of servers for Ceph is 3 (you can build on one, but this makes little sense because of the high probability of losing data).



Network:

We need Flannel: it lets you organize a software-defined network (SDN). The SDN is what allows all our containers to communicate with each other within the cluster (Flannel is installed together with k8s and is described below).



Server Preparation



In our example we use 3 physical servers. Install Ubuntu 16.04 on all of them and do not create swap partitions (a k8s requirement).
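If a host already has swap enabled, one way to turn it off (a minimal sketch, assuming a standard swap entry in /etc/fstab) is:

 swapoff -a                              # disable swap immediately
 sed -i '/ swap / s/^/#/' /etc/fstab     # comment out the swap line so it stays off after a reboot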



Provide at least one disk (or partition) for Ceph in each server.



Do not enable SELinux support (it is turned off by default in Ubuntu 16.04).



We named the servers kub01, kub02, and kub03. The sda2 partition on each server is reserved for Ceph (formatting it in advance is optional).
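You can quickly verify the disk layout on each node with lsblk; the partition reserved for Ceph (sda2 in this example) should be present and not mounted anywhere:

 lsblk /dev/sda    # sda2 should appear in the list with an empty MOUNTPOINT column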



Install and configure Ceph



I will describe the Ceph installation fairly briefly. There are many examples on the web and the Ceph site itself has pretty good documentation.



We perform all operations as the privileged root user.



Create a temporary directory:



 mkdir ~/ceph-admin
 cd ~/ceph-admin


Install Ceph:



 apt install ceph-deploy ceph-common 


Create an SSH key and copy it to all servers. The ceph-deploy utility needs this:



 ssh-keygen
 ssh-copy-id kub01
 ssh-copy-id kub02
 ssh-copy-id kub03




(you may need to adjust the SSH server configuration to allow logging in as root).
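For example, on a stock Ubuntu 16.04 install, one way to permit key-based root logins (a sketch; adapt it to your own security policy) is to set PermitRootLogin in /etc/ssh/sshd_config on every node:

 sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config   # allow root login with an SSH key only
 systemctl restart ssh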



Check that kub01 is not mapped to 127.0.0.1 in your /etc/hosts (if it is, delete that line).
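All three host names must resolve to the real addresses of the nodes, for example (the IP addresses below are placeholders; substitute your own):

 # /etc/hosts on every node
 192.168.1.11  kub01
 192.168.1.12  kub02
 192.168.1.13  kub03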



Create a disk cluster and initialize it:



 ceph-deploy new kub01 kub02 kub03
 ceph-deploy install kub01 kub02 kub03
 ceph-deploy mon create-initial
 ceph-deploy osd prepare kub01:sda2 kub02:sda2 kub03:sda2
 ceph-deploy osd activate kub01:sda2 kub02:sda2 kub03:sda2


Check our disk cluster:



 ceph -s
     cluster 363a4cd8-4cb3-4955-96b2-73da72b63cf5
      health HEALTH_OK


You can use the following useful commands:



 ceph -s
 ceph df
 ceph osd tree


Now that we have verified that Ceph is working, we will create a separate pool for k8s:



 ceph osd pool create kube 100 100 


(the two 100 values are pg_num and pgp_num, the number of placement groups; you can view all existing pools with ceph df)



Now we will create a separate user for our kube pool and save the keys:



 ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
 ceph auth get-key client.admin > /etc/ceph/client.admin
 ceph auth get-key client.kube > /etc/ceph/client.kube


(you will need the keys to access the k8s storage)



Install Kubernetes



Add the k8s repository to our system:



 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
 deb http://apt.kubernetes.io/ kubernetes-xenial main
 EOF


Now install the main packages:



 apt update
 apt install -y docker.io kubelet kubeadm kubernetes-cni


Initialize and run k8s



 kubeadm init --pod-network-cidr=10.244.0.0/16 


(the 10.244.0.0/16 network is required for Flannel to work; do not change it)



Save the join command printed at the end of the output; it will be needed to attach nodes to the cluster.



It is convenient to use a separate unprivileged user for working with k8s. Create it and copy the k8s configuration file into its home directory:



 useradd -s /bin/bash -m kube
 mkdir ~kube/.kube
 cp /etc/kubernetes/admin.conf ~kube/.kube/config
 chown kube: ~kube/.kube/config


The kubectl utility is used to work with k8s. We use it only as the kube user. To switch to that user, run:



 su - kube 
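To make sure kubectl works under this user, you can list the cluster nodes (at this point only the master should appear) and print the cluster endpoints:

 kubectl get nodes
 kubectl cluster-info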


Allow containers to run on the master node:



 kubectl taint nodes --all node-role.kubernetes.io/master- 


Configure access rights:



 kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default 


Install Flannel (the network subsystem):



 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 


How to check that everything works
Run the command:



 kubectl -n kube-system get pods 


The output should be something like:



 NAME                                       READY     STATUS    RESTARTS   AGE
 etcd-kub01.domain.com                      1/1       Running   1          4d
 kube-apiserver-kub01.domain.com            1/1       Running   1          4d
 kube-controller-manager-kub01.domain.com   1/1       Running   0          4d
 kube-dns-7c6d8859cb-dmqrn                  3/3       Running   0          1d
 kube-flannel-ds-j948h                      1/1       Running   0          1d
 kube-proxy-rmbqq                           1/1       Running   0          1d
 kube-scheduler-kub01.domain.com            1/1       Running   1          4d




Install and configure the web interface



 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml 


Create a user to access the web interface:

 cat << EOF > account.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: admin-user
   namespace: kube-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: admin-user
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: admin-user
   namespace: kube-system
 EOF
 kubectl -n kube-system create -f account.yaml


Start kubectl proxy; you can do it like this:



 kubectl proxy & 


And forward port 8001 from your working machine to kub01 server:



 ssh -L 8001:127.0.0.1:8001 -N kub01 & 


Now we can access the web interface from our working machine at:



http://127.0.0.1:8001/ui

(a web interface will open where you need to specify a token)



You can get a token to access the web interface like this:



 kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') 


Configure Kubernetes to work with Ceph



The current kube-controller-manager-amd64:v1.9.2 controller image is missing the rbd binary required to work with Ceph, so we will build our own kube-controller-manager:



What is RBD, why is it needed, and why is it missing?
RBD (RADOS Block Device) is the block device that k8s uses to create and mount volumes for Docker containers. In this case it is the rbd binary shipped in the ceph-common package.



k8s apparently does not include this package in its controller image because it depends on the operating system distribution you are using. Therefore, when building your own controller image, be sure to use your exact distribution so that rbd matches it.



To build our kube-controller-manager image, do the following (all commands are executed as root):



 mkdir docker
 cat << 'EOF' > docker/Dockerfile   # quoted EOF so the shell does not expand the ${...} variables meant for Docker
 FROM ubuntu:16.04
 ARG KUBERNETES_VERSION=v1.9.2

 ENV DEBIAN_FRONTEND=noninteractive \
     container=docker \
     KUBERNETES_DOWNLOAD_ROOT=https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64 \
     KUBERNETES_COMPONENT=kube-controller-manager

 RUN set -x \
     && apt-get update \
     && apt-get install -y \
         ceph-common \
         curl \
     && curl -L ${KUBERNETES_DOWNLOAD_ROOT}/${KUBERNETES_COMPONENT} -o /usr/bin/${KUBERNETES_COMPONENT} \
     && chmod +x /usr/bin/${KUBERNETES_COMPONENT} \
     && apt-get purge -y --auto-remove \
         curl \
     && rm -rf /var/lib/apt/lists/*
 EOF
 docker build -t "my-kube-controller-manager:v1.9.2" docker/


(Be sure to specify the current versions of k8s and of the OS distribution you use.)



Check that our image was built successfully:



 docker images | grep my-kube-controller-manager 


Check that our image has rbd:



 docker run my-kube-controller-manager:v1.9.2 whereis rbd 


You should see something like this: rbd: /usr/bin/rbd /usr/share/man/man8/rbd.8.gz



Replace the standard controller with ours by editing the file:

/etc/kubernetes/manifests/kube-controller-manager.yaml



Replace the line:

image: gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2

with:

image: my-kube-controller-manager:v1.9.2
imagePullPolicy: IfNotPresent

(be sure to add the imagePullPolicy directive so that k8s does not try to pull this image from the Internet)
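After the edit, the relevant fragment of the manifest should look roughly like this (a sketch; the container's other fields such as command and volumeMounts stay as they were):

 spec:
   containers:
   - name: kube-controller-manager
     image: my-kube-controller-manager:v1.9.2
     imagePullPolicy: IfNotPresent

The kubelet watches /etc/kubernetes/manifests, so it recreates the kube-controller-manager pod by itself as soon as the file is saved.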



Switch back to the kube user and wait for our controller to start up (nothing needs to be done manually).



 kubectl -n kube-system describe pods | grep kube-controller 


You should see that our image is in use:

Image: my-kube-controller-manager:v1.9.2



Now that our controller with RBD support is running, we can start wiring k8s and Ceph together.



Setting up the disk subsystem (k8s + Ceph)



Add keys to k8s to access Ceph:



 kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-file=/etc/ceph/client.admin --namespace=kube-system
 kubectl create secret generic ceph-secret-kube --type="kubernetes.io/rbd" --from-file=/etc/ceph/client.kube --namespace=default


Create StorageClass (default):



 cat << EOF > ceph_storage.yaml
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: ceph-rbd
   annotations: {"storageclass.kubernetes.io/is-default-class":"true"}
 provisioner: kubernetes.io/rbd
 parameters:
   monitors: kub01:6789,kub02:6789,kub03:6789
   pool: kube
   adminId: admin
   adminSecretName: ceph-secret
   adminSecretNamespace: "kube-system"
   userId: kube
   userSecretName: ceph-secret-kube
   fsType: ext4
   imageFormat: "2"
   imageFeatures: "layering"
 EOF
 kubectl create -f ceph_storage.yaml


How do we check that the disk subsystem works?
Check that the StorageClass is available:



 kube@kub01:~$ kubectl get storageclass
 NAME                 PROVISIONER         AGE
 ceph-rbd (default)   kubernetes.io/rbd   4d


Create a test pod with a disk:



 cat << EOF > test_pod.yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: claim1
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
 ---
 apiVersion: v1
 kind: Pod
 metadata:
   name: test-pod-with-pvc
 spec:
   volumes:
   - name: test-pvc-storage
     persistentVolumeClaim:
       claimName: claim1
   containers:
   - name: test-container
     image: kubernetes/pause
     volumeMounts:
     - name: test-pvc-storage
       mountPath: /var/lib/www/html
 EOF
 kubectl create -f test_pod.yaml


Check that the Pod was created (you need to wait for the creation and launch):



 kube@kub01:~$ kubectl get pods
 NAME                READY     STATUS    RESTARTS   AGE
 test-pod-with-pvc   1/1       Running   0          15m


Check that the claim (the request for a disk) was created:



 kube@kub01:~$ kubectl get pvc
 NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
 claim1    Bound     pvc-076df6ee-0ce9-11e8-8b93-901b0e8fc39b   1Gi        RWO            ceph-rbd       12m


Check that the disk itself is created:



 kube@kub01:~$ kubectl get pv
 NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS   REASON    AGE
 pvc-076df6ee-0ce9-11e8-8b93-901b0e8fc39b   1Gi        RWO            Delete           Bound     default/claim1   ceph-rbd


Finally, check that the drive is mounted on the system:



 root@kub01:~$ mount | grep pvc-076df6ee-0ce9-11e8-8b93-901b0e8fc39b
 /dev/rbd0 on /var/lib/kubelet/pods/076fff13-0ce9-11e8-8b93-901b0e8fc39b/volumes/kubernetes.io~rbd/pvc-076df6ee-0ce9-11e8-8b93-901b0e8fc39b type ext4 (rw,relatime,stripe=1024,data=ordered)
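As an extra check from the Ceph side, you can look at the mapped RBD devices and at the images k8s created in the kube pool (the rbd utility comes with ceph-common; run this on the node where the pod is running):

 rbd showmapped    # shows which RBD images are mapped to /dev/rbdX devices
 rbd ls -p kube    # lists the images in the kube pool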




Adding new (additional) nodes to the k8s cluster



On the new server, run the following commands:



 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
 deb http://apt.kubernetes.io/ kubernetes-xenial main
 EOF
 apt update
 apt install -y docker.io kubelet kubeadm kubernetes-cni ceph-common python


Attach the node to the master:

We need a join token. You can list the existing tokens on the master by running:



 kubeadm token list 


Or create a new one:

 kubeadm token create --print-join-command 




An example of a command to join (execute on a new node):



 kubeadm join --token cb9141.6a912d1dd7f66ff5 8.8.8.8:6443 --discovery-token-ca-cert-hash sha256:f0ec6d8f9699169089c89112e0e6b5905b4e1b42db22815186240777970dc6fd 


Install and configure Helm



Helm was created for quick and easy installation of applications in k8s.



A list of available applications (charts) can be found here.



Install and initialize Helm:



 curl https://storage.googleapis.com/kubernetes-helm/helm-v2.8.0-linux-amd64.tar.gz | tar -xz
 ./linux-amd64/helm init
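Once helm init has installed the Tiller component into the cluster, installing an application takes a single command. A minimal sketch (stable/memcached here is just an arbitrary example chart):

 ./linux-amd64/helm repo update                # refresh the chart list from the configured repositories
 ./linux-amd64/helm install stable/memcached   # install an example chart into the cluster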


P.S.: it took 6 hours to write this article. Do not judge too harshly if there are typos somewhere. Ask questions; I will gladly answer and help.

Source: https://habr.com/ru/post/348688/


