First, we need the rbd client on all the Kubernetes nodes. (Translator's note: RBD, or RADOS Block Device, is a Linux kernel driver that lets you attach Ceph objects as block devices.) The author of the article uses the Ceph Jewel release and Ubuntu 16.04 on the Kubernetes nodes, so installing the Ceph client libraries (including the aforementioned rbd) is straightforward:
$ sudo apt-get install ceph-common
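An optional sanity check on each node, not part of the original walkthrough, is to confirm that the rbd CLI is present and that the rbd kernel module loads:
$ rbd --version
$ sudo modprobe rbd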
There is no rbd client installed in the official kube-controller-manager image, so we will use a different image. To do this, change the image name in /etc/kubernetes/manifests/kube-controller-manager.yaml to quay.io/attcomdev/kube-controller-manager:v1.6.1 (version 1.6.3 is currently available, but we have only tested 1.5.3 and 1.6.1 - translator's note) and wait until kube-controller-manager restarts with the new image.
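For reference, the relevant part of the static pod manifest might look like the illustrative fragment below (only the image field changes; the surrounding layout is assumed). Since kubelet watches the manifest directory, the controller-manager pod is recreated automatically, and you can watch for it to come back:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (illustrative fragment)
spec:
  containers:
  - name: kube-controller-manager
    image: quay.io/attcomdev/kube-controller-manager:v1.6.1

$ kubectl get pods --namespace=kube-system | grep controller-manager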
For kube-controller-manager to provision the storage, it needs an administrator key from Ceph. You can obtain this key like this:
$ sudo ceph --cluster ceph auth get-key client.admin
Add this key to Kubernetes as a secret:
$ kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
    --from-literal=key='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g==' --namespace=kube-system
Then create a dedicated pool for Kubernetes volumes and a client.kube user with access to it:
$ sudo ceph --cluster ceph osd pool create kube 1024 1024
$ sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
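If you want to double-check the result on the Ceph side (an optional step), the pool and the new user's capabilities can be listed with standard Ceph CLI calls:
$ sudo ceph --cluster ceph osd lspools
$ sudo ceph --cluster ceph auth get client.kube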
Now get the client.kube key:
$ sudo ceph --cluster ceph auth get-key client.kube
Store this key as a secret in the default namespace:
$ kubectl create secret generic ceph-secret-kube --type="kubernetes.io/rbd" \
    --from-literal=key='AQC/c+dYsXNUNBAAMTEW1/WnzXdmDZIBhcw6ug==' --namespace=default
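At this point both secrets should exist, one per namespace; an optional check:
$ kubectl get secret ceph-secret --namespace=kube-system
$ kubectl get secret ceph-secret-kube --namespace=default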
Now create the StorageClass manifest:
$ cat > ceph-storage-fast_rbd.yml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast_rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: <monitor-1-ip>:6789, <monitor-2-ip>:6789, <monitor-3-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
EOF
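Assuming the manifest is applied the same way as the other files in this walkthrough, the StorageClass is created and can then be listed:
$ kubectl create -f ceph-storage-fast_rbd.yml
$ kubectl get storageclass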
Next, create a test PersistentVolumeClaim that uses the new StorageClass fast_rbd:
$ cat > ceph-vc.yml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: fast_rbd
EOF
$ kubectl create -f ceph-vc.yml --namespace=default
Check that the claim was bound and the volume was dynamically provisioned:
$ kubectl describe pvc
Name:          claim1
Namespace:     default
StorageClass:  fast_rbd
Status:        Bound
Volume:        pvc-c1ffa983-1b8f-11e7-864f-0243fc58af9d
Labels:
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/rbd
Capacity:      3Gi
Access Modes:  RWO
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                         -------------  ----    ------                 -------
  3m         3m        1      persistentvolume-controller                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-c1ffa983-… using kubernetes.io/rbd

$ kubectl describe pv
Name:            pvc-c1ffa983-1b8f-11e7-864f-0243fc58af9d
Labels:
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/rbd
StorageClass:    fast_rbd
Status:          Bound
Claim:           default/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        3Gi
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [192.168.42.10:6789]
    RBDImage:      kubernetes-dynamic-pvc-c201abb5-1b8f-11e7-84a4-0243fc58af9d
    FSType:
    RBDPool:       kube
    RadosUser:     kube
    Keyring:       /etc/ceph/keyring
    SecretRef:     &{ceph-secret-kube}
    ReadOnly:      false
Events:
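The freshly provisioned image can also be seen directly on the Ceph side by listing the kube pool (an optional check):
$ sudo rbd --cluster ceph ls kube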
Finally, run a test pod that mounts this volume (claim1):
$ cat > pod.yml <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1
EOF
$ kubectl create -f pod.yml --namespace=default
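To confirm that the RBD volume really is mounted inside the container, one option is to inspect the filesystem at the mount path (the pod name below is a placeholder; take the actual name from kubectl get pods):
$ kubectl get pods -l app=nginx --namespace=default
$ kubectl exec <nginx-pod-name> --namespace=default -- df -h /var/lib/www/html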
Source: https://habr.com/ru/post/329666/