
Creating persistent storage with provisioning in Ceph-based Kubernetes



Translator's preface: When we finally sat down to prepare our own material on deploying Ceph in Kubernetes, we found this ready-made and, importantly, fresh (April 2017) guide from Cron (Bosnia and Herzegovina) in English. Convinced of its simplicity and practicality, we decided to share it with other system administrators and DevOps engineers in an "as is" format, adding only one small fragment that was missing from the listings.



Software-defined storage has been gaining popularity over the past few years, especially with the massive proliferation of private cloud infrastructures. Such storage is a critical component for running Docker containers, and the most popular option is Ceph. If you already use Ceph storage, then thanks to its full support in Kubernetes it is easy to set up dynamic volume creation on users' request. Automation of volume creation is implemented using Kubernetes StorageClasses. This guide shows how Ceph storage is used in a Kubernetes cluster. (The creation of a test Kubernetes installation managed by kubeadm is described in this material in English.)



To start, besides the Kubernetes installation itself, you will also need a working Ceph cluster and the rbd client on all Kubernetes nodes. Translator's note: RBD, or RADOS Block Device, is a Linux kernel driver that allows Ceph objects to be attached as block devices. The author of the article uses the Jewel release of Ceph and Ubuntu 16.04 on the Kubernetes nodes, so installing the Ceph client libraries (including the mentioned rbd) is simple:


 $ sudo apt-get install ceph-common 
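To make sure the client is actually usable on each node before moving on, a couple of quick checks can be run (this verification step is our addition, not part of the original guide):

 $ rbd --version                            # prints the installed Ceph/rbd client version 
 $ lsmod | grep rbd || sudo modprobe rbd    # makes sure the rbd kernel module is loaded 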


The official kube-controller-manager image does not include the rbd client, so we will use a different image. To do this, change the image name in /etc/kubernetes/manifests/kube-controller-manager.yaml to quay.io/attcomdev/kube-controller-manager:v1.6.1 (version 1.6.3 is currently available, but we have only tested 1.5.3 and 1.6.1 — translator's note) and wait until kube-controller-manager restarts with the new image.
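For reference, the relevant fragment of the static pod manifest after the edit might look roughly like this (the surrounding fields are abbreviated here and will differ between installations):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (fragment)
spec:
  containers:
  - name: kube-controller-manager
    # image swapped for one that ships the rbd client
    image: quay.io/attcomdev/kube-controller-manager:v1.6.1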



In order for kube-controller-manager to perform provisioning for the storage, it needs the administrator key from Ceph. You can get this key like this:



 $ sudo ceph --cluster ceph auth get-key client.admin 


Add it as a Kubernetes secret:



 $ kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \ 
     --from-literal=key='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g==' --namespace=kube-system 
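Instead of copying the key by hand, the two steps can be combined; a small variation on the command above (our sketch, not from the original article):

 $ kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \ 
     --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.admin)" \ 
     --namespace=kube-system 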


For the Kubernetes nodes we will create a separate pool in the Ceph cluster — this is the pool that rbd will use on the nodes:



 $ sudo ceph --cluster ceph osd pool create kube 1024 1024 
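The two 1024 values are the pg_num and pgp_num (placement group counts) for the new pool; a sensible value depends on how many OSDs your cluster has. You can verify the result like this (the check itself is our addition):

 $ sudo ceph --cluster ceph osd lspools                # list pools, kube should appear 
 $ sudo ceph --cluster ceph osd pool get kube pg_num   # confirm the placement group count 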


We will also create a client key (cephx authentication is enabled in the Ceph cluster):



 $ sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube' 


For greater isolation between different namespaces, you can create a separate pool for each namespace in the Kubernetes cluster (a sketch of this is given below, after the secret is created). Get the client.kube key:



 $ sudo ceph --cluster ceph auth get-key client.kube 


And create a new secret in the default namespace:



 $ kubectl create secret generic ceph-secret-kube --type="kubernetes.io/rbd" \ 
     --from-literal=key='AQC/c+dYsXNUNBAAMTEW1/WnzXdmDZIBhcw6ug==' --namespace=default 
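If you want the per-namespace isolation mentioned above, the same three steps (pool, client key, secret) are simply repeated for each namespace; a minimal sketch, assuming a hypothetical namespace called myns (all names here are examples, not from the original article):

 # pool and Ceph client dedicated to the myns namespace 
 $ sudo ceph --cluster ceph osd pool create kube-myns 128 128 
 $ sudo ceph --cluster ceph auth get-or-create client.kube-myns mon 'allow r' osd 'allow rwx pool=kube-myns' 
 # secret with that client's key, created in the target namespace 
 $ kubectl create secret generic ceph-secret-kube-myns --type="kubernetes.io/rbd" \ 
     --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.kube-myns)" \ 
     --namespace=myns 

Each such pool would also need its own StorageClass pointing at it, since a StorageClass references a single pool in its parameters.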


When both secrets are added, you can define and create a new StorageClass:



 $ cat > ceph-storage-fast_rbd.yml <<EOF 


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast_rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: <monitor-1-ip>:6789, <monitor-2-ip>:6789, <monitor-3-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube


 EOF 
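The listing above only writes the manifest to a file; the StorageClass presumably still has to be created from it, for example like this (the original does not show this step explicitly):

 $ kubectl create -f ceph-storage-fast_rbd.yml 
 $ kubectl get storageclass 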


(Translator's note: for some reason this listing is missing from the original article, so we added our own and notified the author of the omission.)



Now create a "persistent volume request" (PersistentVolumeClaim) using the created StorageClass named fast_rbd:



 $ cat > ceph-vc.yml <<EOF 


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: fast_rbd


 EOF 
 $ kubectl create -f ceph-vc.yml --namespace=default 


Check that everything works correctly:



 $ kubectl describe pvc 
 Name:           claim1 
 Namespace:      default 
 StorageClass:   fast_rbd 
 Status:         Bound 
 Volume:         pvc-c1ffa983-1b8f-11e7-864f-0243fc58af9d 
 Labels: 
 Annotations:    pv.kubernetes.io/bind-completed=yes 
                 pv.kubernetes.io/bound-by-controller=yes 
                 volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/rbd 
 Capacity:       3Gi 
 Access Modes:   RWO 
 Events: 
   FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                 Message 
   ---------  --------  -----  ----                         -------------  ------  ------                 ------- 
   3m         3m        1      persistentvolume-controller                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-c1ffa983-… using kubernetes.io/rbd 

 $ kubectl describe pv 
 Name:            pvc-c1ffa983-1b8f-11e7-864f-0243fc58af9d 
 Labels: 
 Annotations:     pv.kubernetes.io/bound-by-controller=yes 
                  pv.kubernetes.io/provisioned-by=kubernetes.io/rbd 
 StorageClass:    fast_rbd 
 Status:          Bound 
 Claim:           default/claim1 
 Reclaim Policy:  Delete 
 Access Modes:    RWO 
 Capacity:        3Gi 
 Message: 
 Source: 
     Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime) 
     CephMonitors:  [192.168.42.10:6789] 
     RBDImage:      kubernetes-dynamic-pvc-c201abb5-1b8f-11e7-84a4-0243fc58af9d 
     FSType: 
     RBDPool:       kube 
     RadosUser:     kube 
     Keyring:       /etc/ceph/keyring 
     SecretRef:     &{ceph-secret-kube} 
     ReadOnly:      false 
 Events: 


The last step is to create a test pod that uses the created PersistentVolumeClaim (claim1):

 $ cat > pod.yml <<EOF 


apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1


 EOF 
 $ kubectl create -f pod.yml --namespace=default 


That's it: the new container uses an RBD image in Ceph that was dynamically created for the user's request (PersistentVolumeClaim).
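To double-check that the pod really received the dynamically provisioned RBD volume, you can look at the mount from inside the container (the pod name below is a placeholder; take the real one from the first command):

 $ kubectl get pods --namespace=default 
 $ kubectl exec <nginx-pod-name> --namespace=default -- df -h /var/lib/www/html 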

Source: https://habr.com/ru/post/329666/


