
Ceph FS distributed file system in 15 minutes


It will take us only a few minutes to bring up a distributed Ceph FS file system.

Quick reference

Ceph is an open-source project for resilient, highly scalable petabyte-class storage. At its core it aggregates the disk space of several dozen servers into an object store, which allows for flexible, multiple, pseudo-random data redundancy. On top of this object store the Ceph developers build three more projects: the RADOS Gateway (an S3- and Swift-compatible REST gateway), RBD (a distributed block device) and Ceph FS (a POSIX-compatible distributed file system).


Example Description

In my small example I use only 3 servers as storage. Each server has 3 SATA disks: /dev/sda for the system, and /dev/sdb and /dev/sdc for the Ceph FS file system. The OS in this example is Ubuntu 12.04 LTS. One more server will mount the file system, that is, act as a client. We use the default level of redundancy, that is, two replicas of each block.
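The steps below assume the nodes can resolve each other by their short names. If there is no DNS in the lab, a minimal sketch of /etc/hosts entries for every node and the client could look like this (the IP addresses and FQDNs are taken from the configuration used later in this article; using /etc/hosts instead of DNS is my assumption):

 192.168.2.31  node01.ceph.labspace.studiogrizzly.com  node01
 192.168.2.32  node02.ceph.labspace.studiogrizzly.com  node02
 192.168.2.33  node03.ceph.labspace.studiogrizzly.com  node03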
At the time of this writing, the developers offer two methods for creating simple configurations: the older one, using mkcephfs , and the newer ceph-deploy . For newer versions, starting with the 0.6x branch (cuttlefish), ceph-deploy is already the recommended tool. In this example, however, I use the earlier stable release from the 0.56.x branch (bobtail) and mkcephfs .

I'll warn you right away: Ceph FS is still in pre-production status at the moment, but judging by community activity this project is called one of the hottest among software-defined storage.

Let's get started


Step 0. Install the OS

Perform a minimal installation. Additionally, install ntpdate and your favorite editor, for example vim .

 aptitude update && aptitude install ntpdate vim 
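Ceph monitors are sensitive to clock skew between nodes, which is why ntpdate is installed here. A minimal sketch of keeping the clocks in sync (the NTP server pool.ntp.org and the hourly cron job are my assumptions, not part of the original walkthrough) might be:

 # one-off synchronization on every node
 ntpdate pool.ntp.org
 # optionally repeat it every hour via cron
 echo 'ntpdate -s pool.ntp.org' > /etc/cron.hourly/ntpdate
 chmod +x /etc/cron.hourly/ntpdate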

Step 1. Install Ceph packages

We install the Ceph packages on each cluster node and on the client.

 wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
 echo deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
 aptitude update && aptitude install ceph
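To make sure the packages really came from the bobtail repository, it does not hurt to check what was installed (the exact 0.56.x number will depend on the current point release):

 ceph -v
 apt-cache policy ceph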

Step 2. Create a configuration file

On each node and client we create a single configuration file /etc/ceph/ceph.conf

 [global]
   auth cluster required = cephx
   auth service required = cephx
   auth client required = cephx

 [osd]
   osd journal size = 2000
   osd mkfs type = xfs
   osd mkfs options xfs = -f -i size=2048
   osd mount options xfs = rw,noatime,inode64

 [mon.a]
   host = node01
   mon addr = 192.168.2.31:6789

 [mon.b]
   host = node02
   mon addr = 192.168.2.32:6789

 [mon.c]
   host = node03
   mon addr = 192.168.2.33:6789

 [osd.0]
   host = node01
   devs = /dev/sdb

 [osd.1]
   host = node01
   devs = /dev/sdc

 [osd.2]
   host = node02
   devs = /dev/sdb

 [osd.3]
   host = node02
   devs = /dev/sdc

 [osd.4]
   host = node03
   devs = /dev/sdb

 [osd.5]
   host = node03
   devs = /dev/sdc

 [mds.a]
   host = node01

Make the file readable by everyone

 chmod 644 /etc/ceph/ceph.conf 

Step 3. Set up passwordless SSH login between nodes.

Set the root password and generate SSH keys without specifying a passphrase

 passwd root
 ssh-keygen

Create SSH aliases in /root/.ssh/config , matching the node names in your environment

 Host node01
   Hostname node01.ceph.labspace.studiogrizzly.com
   User root
 Host node02
   Hostname node02.ceph.labspace.studiogrizzly.com
   User root
 Host node03
   Hostname node03.ceph.labspace.studiogrizzly.com
   User root

Add public keys to neighboring nodes of the cluster.

 ssh-copy-id root@node02
 ssh-copy-id root@node03
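With passwordless access in place it is easy to verify the aliases and to push the configuration file from Step 2 to the other machines instead of editing it by hand everywhere; a possible sketch (the client IP 192.168.2.39 comes from the example above, and copying the file this way is my addition, not part of the original walkthrough):

 # quick check that the aliases and keys work
 ssh node02 hostname
 ssh node03 hostname
 # distribute the common configuration file
 scp /etc/ceph/ceph.conf node02:/etc/ceph/ceph.conf
 scp /etc/ceph/ceph.conf node03:/etc/ceph/ceph.conf
 scp /etc/ceph/ceph.conf 192.168.2.39:/etc/ceph/ceph.conf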

Step 4. Deploy the cluster

To begin, we prepare the data disks on every node

 mkfs -t xfs -f -i size=2048 /dev/sdb
 mkfs -t xfs -f -i size=2048 /dev/sdc
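Before mounting, a quick sanity check that both disks now carry an XFS file system (this check is my addition, not part of the original walkthrough):

 # should report TYPE="xfs" for both devices
 blkid /dev/sdb /dev/sdc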

Next, prepare working directories and mount disks according to our design.

So for node01 we will execute

 mkdir -p /var/lib/ceph/osd/ceph-0
 mkdir -p /var/lib/ceph/osd/ceph-1
 mount /dev/sdb /var/lib/ceph/osd/ceph-0 -o noatime,inode64
 mount /dev/sdc /var/lib/ceph/osd/ceph-1 -o noatime,inode64

for node02

 mkdir -p /var/lib/ceph/osd/ceph-2
 mkdir -p /var/lib/ceph/osd/ceph-3
 mount /dev/sdb /var/lib/ceph/osd/ceph-2 -o noatime,inode64
 mount /dev/sdc /var/lib/ceph/osd/ceph-3 -o noatime,inode64

and for node03

 mkdir -p /var/lib/ceph/osd/ceph-4
 mkdir -p /var/lib/ceph/osd/ceph-5
 mount /dev/sdb /var/lib/ceph/osd/ceph-4 -o noatime,inode64
 mount /dev/sdc /var/lib/ceph/osd/ceph-5 -o noatime,inode64
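These mounts do not survive a reboot. If you want them to, a minimal sketch of the corresponding /etc/fstab entries for node01 could be (adjust the OSD numbers on the other nodes accordingly; this step is my addition, not part of the original walkthrough):

 /dev/sdb  /var/lib/ceph/osd/ceph-0  xfs  noatime,inode64  0  0
 /dev/sdc  /var/lib/ceph/osd/ceph-1  xfs  noatime,inode64  0  0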

And finally, on node01 we run the Ceph storage creation script.

 mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring 

and then copy the key ceph.keyring to the other nodes of the cluster

 scp /etc/ceph/ceph.keyring node02:/etc/ceph/ceph.keyring
 scp /etc/ceph/ceph.keyring node03:/etc/ceph/ceph.keyring

and on the client node, in my case - 192.168.2.39

 scp /etc/ceph/ceph.keyring 192.168.2.39:/etc/ceph/ceph.keyring 

Set read access on the keyring

 chmod 644 /etc/ceph/ceph.keyring 

Step 5. Launch and Status

Thanks to passwordless login between the nodes, we can start the entire cluster from any one of them

 service ceph -a start 

We also check the cluster status

 ceph -s 

During normal operation the expected status is HEALTH_OK
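A few more commands that are handy for inspecting a freshly built cluster (these checks are my addition; the pool listing in the osd dump output also shows the replication size, which should be 2 here, matching the default redundancy mentioned at the beginning):

 ceph health
 ceph osd tree
 ceph mon stat
 # the pool entries in this output include the replication size
 ceph osd dump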

On the client side, we create a mount point in the desired location, for example /mnt/cephfs , extract the admin key for the ceph kernel client, and mount the file system

 mkdir /mnt/cephfs
 ceph-authtool --name client.admin /etc/ceph/ceph.keyring --print-key | tee /etc/ceph/admin.secret
 mount -t ceph node01:6789,node02:6789,node03:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,noatime
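To convince yourself that the mount actually works, a quick test from the client might look like this (the file name testfile is arbitrary and this check is my addition):

 df -h /mnt/cephfs
 # write ~100 MB and list the directory back
 dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=100
 ls -lh /mnt/cephfs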

Afterword

This is how we get a distributed Ceph FS file system in just 15 minutes. Questions of performance, reliability and maintenance require a deeper dive, and that is material for a separate article, or even more than one.

PS


We have also put together an OpenNebula + Ceph bundle, using only the Ceph object storage, without the Ceph FS file system. Read more in the hub I am promoting.

Source: https://habr.com/ru/post/179823/
