
Building Your Own OpenShift Origin Cluster

"All development - in containers" - this phrase began my fascinating journey into the world of Docker. Attempts to please the requirements of the developers led to the choice of OpenShift Origin. However, to start a full-fledged cluster, as it turned out, the task is not trivial. During the construction of container infrastructure, I tried to find something on the topic, including on Habré, and did not find, oddly enough. Therefore, below I will try to describe the entire basic installation process and try to protect you from the rakes, which you actually walked through.

Let's start:

Environment preparation


All infrastructure objects are dedicated VMs with varying amounts of resources. The minimum hardware requirements are outlined here. It is assumed that traffic between the VMs flows freely and without restrictions. If that is not the case, you can look up which ports need to be open.
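If in doubt, it does not hurt to check basic reachability between the VMs once the DNS records described below are in place. A rough sketch of what I mean (the key OpenShift ports are 8443 for the API and web console, 8053 for SkyDNS, 10250 for the kubelet and 4789/UDP for the SDN):

 root@node01# ping -c1 master.habra.cloud
 root@node01# ping -c1 node02.habra.cloud
 root@node01# iptables -L -n    # make sure nothing between the VMs filters the ports listed above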

The exact component versions I used at the time of this installation are mentioned below where they matter (OpenShift Origin 1.4 and Ansible 2.2.0.0).

So, suppose we have a domain of the form habra.cloud, which we actively use for our infrastructure needs.

Add type "A" records for all hosts of our cluster and set aside the subdomain apps.habra.cloud for the future services in our cloud.

After adding them we get something like this:

 Name                      Type                       Data
 apps
 (same as parent folder)   Start of Authority (SOA)   [16], dc-infra.habra.cloud,
 (same as parent folder)   Name Server (NS)           172.28.246.50.
 ansible                   Host (A)                   172.28.247.200
 master                    Host (A)                   172.28.247.211
 nfs                       Host (A)                   172.28.247.51
 node01                    Host (A)                   172.28.247.212
 node02                    Host (A)                   172.28.247.213

For the apps.habra.cloud zone, the output will be as follows:

 Name   Type       Data             Timestamp
 *      Host (A)   172.28.247.211   static

DNS is configured; now we need to configure the network adapters and the hostnames of the NFS server and the cluster nodes.

We will not dwell on this for long - there is a truckload of information on the subject elsewhere. I will only focus on a few points:


 root@master# hostname master.habra.cloud 
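Keep in mind that a hostname set this way does not survive a reboot; on CentOS 7 I would make it persistent, for example like this (run on each node with its own name):

 root@master# hostnamectl set-hostname master.habra.cloud
 root@master# hostnamectl status    # the static hostname should now be master.habra.cloud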


 root@OpenShiftCluster# cat /etc/resolv.conf
 # Generated by NetworkManager
 search habra.cloud default.svc.cluster.local svc.cluster.local cloud.local default.svc svc local
 nameserver 172.28.246.50

That takes care of the hosts and the network. The next step is to install Docker on all cluster nodes and configure docker-storage.

 root@OpenShiftCluster# yum -y install docker 

In the /etc/sysconfig/docker file, add to the OPTIONS line:

 OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16' 

Next, in accordance with this manual, create docker-storage.

I recommend option "B", so that the storage can be managed with native LVM. Also keep in mind that this is exactly where OpenShift will copy docker images from whatever docker registry it pulls from, and that OpenShift does not delete old and unused images on its own. I therefore recommend making docker-storage at least 30-50 GB on each node; beyond that, size it to your needs.
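Since OpenShift will not clean the images up by itself, it is useful to know that the space can later be reclaimed by hand. A minimal sketch, assuming the cluster is already up and you are logged in on the Master with cluster-admin rights:

 # Without --confirm this is a dry run that only prints what would be removed.
 root@master# oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm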

How I did it:


 root@OpenShiftCluster# fdisk /dev/sdb
 n
 t
 8e
 w
 root@OpenShiftCluster# pvcreate /dev/sdb1
 root@OpenShiftCluster# vgcreate docker-vg /dev/sdb1


 root@OpenShiftCluster# cat /etc/sysconfig/docker-storage-setup
 # Edit this file to override any configuration options specified in
 # /usr/lib/docker-storage-setup/docker-storage-setup.
 #
 # For more details refer to "man docker-storage-setup"
 VG=docker-vg


 root@OpenShiftCluster# docker-storage-setup 


 root@OpenShiftCluster# systemctl is-active docker 


 root@OpenShiftCluster# systemctl enable docker
 root@OpenShiftCluster# systemctl start docker


 root@OpenShiftCluster# systemctl stop docker
 root@OpenShiftCluster# rm -rf /var/lib/docker/*
 root@OpenShiftCluster# systemctl restart docker

Well, Docker is up and running. Now let's configure SELinux the way OpenShift requires.

/etc/selinux/config:
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=enforcing
 # SELINUXTYPE= can take one of these three values:
 #     targeted - Targeted processes are protected,
 #     minimum - Modification of targeted policy. Only selected processes are protected.
 #     mls - Multi Level Security protection.
 SELINUXTYPE=targeted



 root@OpenShiftCluster# setsebool -P virt_use_nfs 1
 root@OpenShiftCluster# setsebool -P virt_sandbox_use_nfs 1
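A quick sanity check that SELinux really is enforcing and the booleans have stuck (a sketch; if getenforce still reports Permissive, `setenforce 1` or a reboot after editing the config will fix it):

 root@OpenShiftCluster# getenforce
 Enforcing
 root@OpenShiftCluster# getsebool virt_use_nfs virt_sandbox_use_nfs
 virt_use_nfs --> on
 virt_sandbox_use_nfs --> on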

Components:
 root@OpenShiftCluster# yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion nfs-utils nfs-utils-lib
 root@OpenShiftCluster# yum update
 root@OpenShiftCluster# yum -y install \
     https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
 root@OpenShiftCluster# sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
 root@OpenShiftCluster# yum -y --enablerepo=epel install pyOpenSSL



 root@master# yum -y --enablerepo=epel install ansible pyOpenSSL 

Moving on to the NFS server. Since we have already installed the NFS clients and even configured SELinux specifically for this, preparing the NFS server costs us almost nothing.

To keep this article from growing to an enormous size, it is assumed that you already have a working NFS server. If that is not the case, here is a good article on the topic.
I will add that my NFS server exports the /nfs/ directory, and I will build on that.
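For reference, a minimal sketch of what bringing up such an NFS server on CentOS 7 looks like (adapt paths and names to your environment):

 root@nfs# yum -y install nfs-utils
 root@nfs# mkdir -p /nfs
 root@nfs# systemctl enable rpcbind nfs-server
 root@nfs# systemctl start rpcbind nfs-server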


 root@nfs# mkdir -p /nfs/infrastructure/registry
 root@nfs# chmod 755 /nfs/infrastructure
 root@nfs# chmod 755 /nfs/infrastructure/registry
 root@nfs# chown nfsnobody:nfsnobody /nfs/infrastructure
 root@nfs# chown nfsnobody:nfsnobody /nfs/infrastructure/registry


/etc/exports:

 /nfs/infrastructure/registry *(rw,sync,root_squash,no_subtree_check,no_wdelay)


 root@nfs# exportfs -a 

Great - now clients can mount this directory.
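To make sure the export really is reachable, a quick check from any cluster node (a sketch; the mount point is arbitrary):

 root@node01# showmount -e nfs.habra.cloud        # should list /nfs/infrastructure/registry
 root@node01# mount -t nfs nfs.habra.cloud:/nfs/infrastructure/registry /mnt
 root@node01# touch /mnt/test && rm /mnt/test     # root is squashed to nfsnobody, which owns the directory
 root@node01# umount /mnt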

Preparing Ansible and its inventory.


If, like me, you already have Ansible installed on a separate host, perform the following steps there. If you want to use the Master for this, you will need to do the same on it.

So, first of all, Ansible must be able to resolve our VMs by hostname. That means we either edit /etc/hosts, or - which is much more correct - register our VMs on our DNS server by whatever means we have (if you use the Master, you do not need to do this, since everything is already in place).
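If you go the /etc/hosts route, it boils down to adding records like these on the Ansible host (addresses taken from the DNS table above):

 root@ansible# cat /etc/hosts
 ...
 172.28.247.211 master.habra.cloud master
 172.28.247.212 node01.habra.cloud node01
 172.28.247.213 node02.habra.cloud node02
 172.28.247.51  nfs.habra.cloud nfs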

Now we need to copy our public SSH key to the cluster nodes so that Ansible can connect to them without any problems.

 root@ansible# for host in master.habra.cloud \
     node01.habra.cloud \
     node02.habra.cloud; \
     do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
     done

In my case the key pair already existed. If that is not the case for you, use the ssh-keygen utility to create your own pair.
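If you do not have a key pair yet, generating one and re-running the loop above is enough (a sketch):

 root@ansible# ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
 root@ansible# ssh master.habra.cloud hostname    # should print the hostname without asking for a password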

Now comes the most important part - editing the Ansible inventory file with the required parameters.

My inventory file:
 root@ansible# cat inventory
 [OSEv3:children]
 masters
 nodes

 [masters]
 master.habra.cloud

 [nodes]
 master.habra.cloud openshift_schedulable=false openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
 node01.habra.cloud openshift_node_labels="{'region': 'primary', 'zone': 'firstzone'}"
 node02.habra.cloud openshift_node_labels="{'region': 'primary', 'zone': 'secondzone'}"

 [OSEv3:vars]
 ansible_ssh_user=root
 openshift_master_default_subdomain=apps.habra.cloud
 containerized=false
 deployment_type=origin


Details about the OpenShift variables for Ansible can be found here.

Only a little remains - grab the latest version of the required OpenShift roles for Ansible (a git client must be installed):

 root@ansible# git clone https://github.com/openshift/openshift-ansible 
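Since the target here is Origin 1.4, it also makes sense to switch the clone to the matching release branch instead of running from master (the branch name assumes the openshift-ansible naming scheme of that time):

 root@ansible# cd openshift-ansible && git checkout release-1.4 && cd ..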

An important note: if you, like me, are deploying to VMs, I would recommend taking snapshots. If something goes wrong during the installation, it is much easier to roll back and change something in the original configuration than to go through long and tedious debugging.

Now for the tastiest part - finally installing OpenShift Origin on our nodes. The following command assumes you are in the same directory as your inventory file.

Note: at the moment, the playbook for the latest OpenShift Origin release (1.4) works correctly with Ansible 2.2.0.0. When upgrading to 1.4 I had to roll Ansible back so that everything would install correctly.
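Before launching the playbook it is worth checking which Ansible you actually have and pinning it if needed. A sketch (here via pip, assuming it is available - downgrading the RPM works just as well):

 root@ansible# ansible --version                  # should report 2.2.0.0
 root@ansible# pip install 'ansible==2.2.0.0'     # pin to the version the 1.4 playbook likes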

 root@ansible# ansible-playbook -i ./inventory openshift-ansible/playbooks/byo/config.yml 

The installation takes about 20 minutes. When it finishes, the play recap should show no failed tasks.

Primary setup


On the Master, we will request the status of the nodes and get the output:

 root@master# oc get nodes
 NAME                 STATUS                     AGE
 master.habra.cloud   Ready,SchedulingDisabled   1d
 node01.habra.cloud   Ready                      1d
 node02.habra.cloud   Ready                      1d

If that is what you see, open a browser and go to https://master.habra.cloud:8443/console

You can already log in with any user.
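The same works from the command line, since an Origin install without a configured identity provider accepts any username/password pair and creates the user on first login (a sketch; "developer" is just an example name):

 root@master# oc login https://master.habra.cloud:8443 -u developer -p anypassword
 root@master# oc whoami     # prints "developer"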

However, the joy is not yet complete. For complete happiness we need to perform a few more actions: first, get the router and the private docker-registry up and running; second, reconfigure the docker-registry so that its storage lives on our NFS server.


 root@master# oadm manage-node master.habra.cloud --schedulable=true
 root@master# oc get nodes
 NAME                 STATUS    AGE
 master.habra.cloud   Ready     1d
 node01.habra.cloud   Ready     1d
 node02.habra.cloud   Ready     1d

The default installer creates deployment tasks for the router and the registry in the default namespace, but the deployment will only happen once the Master's status is Schedulable.
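If the registry deployment does not start on its own after the Master becomes schedulable, it can be nudged by hand; a sketch using the old `oc deploy` syntax that Origin 1.4 still understands:

 root@master# oc deploy docker-registry --latest -n default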

Let's check how it goes:
 root@master# oc project default
 root@master# oc get all
 NAME                  REVISION   DESIRED   CURRENT   TRIGGERED BY
 dc/docker-registry    4          1         1         config
 dc/router             3          1         1         config
 NAME                   DESIRED   CURRENT   READY   AGE
 rc/docker-registry-1   0         0         0       1d
 rc/router-1            1         1         1       1d
 NAME                  CLUSTER-IP     EXTERNAL-IP   PORT(S)                   AGE
 svc/docker-registry   172.30.7.135   <none>        5000/TCP                  1d
 svc/kubernetes        172.30.0.1     <none>        443/TCP,53/UDP,53/TCP     1d
 svc/router            172.30.79.17   <none>        80/TCP,443/TCP,1936/TCP   1d
 NAME                         READY     STATUS    RESTARTS   AGE
 po/docker-registry-1-ayuuo   1/1       Running   11         1d
 po/router-1-lzewh            1/1       Running   8          1d


This means the router is ready, and our future services will be reachable by their DNS names. Since we have already created the necessary directories on the NFS server, all that remains is to point OpenShift at them.


nfs-pv.yaml
 apiVersion: v1
 kind: PersistentVolume
 metadata:
   name: registrypv
 spec:
   capacity:
     storage: 20Gi
   accessModes:
     - ReadWriteOnce
   nfs:
     path: /nfs/infrastructure/registry
     server: nfs.habra.cloud
   persistentVolumeReclaimPolicy: Recycle


nfs-claim1.yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: registry-claim1
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 20Gi

 root@master# oc create -f nfs-pv.yaml 


 root@master# oc get pv
 NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      ...
 registrypv   20Gi       RWO           Recycle         Available


Persistent Volume Claim
 root@master# oc create -f nfs-claim1.yaml
 root@master# oc get pvc
 NAME              STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
 registry-claim1   Bound     registrypv   20Gi       RWO           1d


 root@master# oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
     --claim-name=registry-claim1 --overwrite


 root@master# oc get pods
 NAME                      READY     STATUS    RESTARTS   AGE
 docker-registry-2-sdfhk   1/1       Running   1          1d

Almost done - a little DNS tuning remains on the Master.

/etc/dnsmasq.conf:
 # Reverse DNS record for master
 host-record=master.habra.cloud,172.28.247.211
 # Wildcard DNS for OpenShift Applications - Points to Router
 server=/habra.cloud/172.28.246.50
 address=/apps.habra.cloud/172.28.247.211
 server=/apps.habra.cloud/172.28.246.50
 # Forward .local queries to SkyDNS
 server=/local/127.0.0.1#8053
 # Forward reverse queries for service network to SkyDNS.
 # This is for default OpenShift SDN - change as needed.
 server=/17.30.172.in-addr.arpa/127.0.0.1#8053
 # Forward .habra.cloud queries to DC
 server=/habra.cloud/172.28.246.50#53



And in /etc/origin/master/master-config.yaml make sure SkyDNS is bound to port 8053, leaving port 53 to dnsmasq:

 dnsConfig:
   bindAddress: 0.0.0.0:8053


 root@master# service dnsmasq restart
 root@master# service origin-master restart
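A quick way to check that the wildcard works after the restart is to ask the Master's dnsmasq for any name under apps.habra.cloud (myapp here is made up, anything will do):

 root@master# dig +short @master.habra.cloud myapp.apps.habra.cloud
 172.28.247.211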

Done! The cluster is running.

Conclusion


Finally, I would like to say that this is only the beginning of a long journey. In day-to-day operation plenty of inconsistencies and rough edges show up. Not every container, and not even every service from the stock templates, will work out of the box. For most of the tasks I ran into, I had to either rework the templates by hand or create my own. And yet, how much more pleasant it is to work with ready-made services!

Thanks for your attention.


Comments and additions are welcome.

Source: https://habr.com/ru/post/324240/

