
GlusterFS: experience with the new version

Hello.

Last time (Growing up with GlusterFS) I described how to configure GlusterFS 3.0.x for my needs. We recently upgraded GlusterFS to 3.2.x, and since the configuration differs quite a lot between these versions, I decided to describe the process for the benefit of the wider IT crowd.

I should say right away that the move to the new version was forced by glitches in the old one.
It happened after yet another Amazon EBS failure: the disk on the second master degraded, and we took that node out of service while the brave AWS engineers fixed the problems. After everything returned to normal we tried to bring the second master back into the scheme, but something irreparable happened: all the clients simply hung, and the network drives plainly would not mount. The clients were full of errors, and long googling led to "hints" on forums that such hangs were fixed in newer versions, which was received with joy and sadness at the same time :) We had to upgrade.
The first thing that became clear is that our configs from the old version would not work; moreover, they are no longer needed, since everything is now configured from the command line.
Setting up master-master replication is also quite different from before, which brings its own drawbacks; I will cover those at the end of the article.
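
Before going further, it does not hurt to check that both masters really ended up on the new version; gluster can report it:

 root@files1.domain.com:~# gluster --version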

Initialize peer:

 root@files1.domain.com:~# gluster peer probe files2.domain.com
 Probe successful


Checking:

 root@files1.domain.com:~# gluster peer status
 Number of Peers: 1

 Hostname: files2.domain.com
 Uuid: c8f2fg43-ch7e-47f3-9ec5-b4b66f81101d
 State: Peer in Cluster (Connected)


Next, create a disk (volume):

 root@files1.domain.com:~# gluster volume create volume_data replica 2 transport tcp files1.domain.com:/data files2.domain.com:/data
 Creation of volume volume_data has been successful. Please start the volume to access data.


Start the drive:

 root@files1.domain.com:~# gluster volume start volume_data
 Starting volume volume_data has been unsuccessful


So we run the following on both masters:

 /etc/init.d/glusterfs-server restart 
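
Depending on what exactly went wrong, the volume may still need to be started again after the restart; the same start command from above is simply repeated:

 root@files1.domain.com:~# gluster volume start volume_data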


If everything went smoothly, the output of the following command should look like this:

 root@files1.domain.com:~# gluster volume info

 Volume Name: volume_data
 Type: Replicate
 Status: Started
 Number of Bricks: 2
 Transport-type: tcp
 Bricks:
 Brick1: files1.domain.com:/data
 Brick2: files2.domain.com:/data


We restrict access to our shares like this:

 root@files1.domain.com:~# gluster volume set volume_data auth.allow 10.* 
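
If the 10.* wildcard is too broad for your setup, auth.allow also takes a comma-separated list of addresses; the IPs below are just placeholders:

 root@files1.domain.com:~# gluster volume set volume_data auth.allow 10.0.0.10,10.0.0.11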


Look at the info again:

 Volume Name: volume_data
 Type: Replicate
 Status: Started
 Number of Bricks: 2
 Transport-type: tcp
 Bricks:
 Brick1: files1.domain.com:/data
 Brick2: files2.domain.com:/data
 Options Reconfigured:
 auth.allow: 10.*


Setup on the servers is finished; now on to the client. I assume that you can install the required version of the client package and that it is already on the system.
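
For reference, on Debian/Ubuntu this usually boils down to something like the line below, assuming a suitable 3.2.x build of glusterfs-client is available in your repository or PPA (the package name may differ on other distributions):

 root@client.domain.com:~# apt-get install glusterfs-client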

 root@client.domain.com:~# mkdir /data
 root@client.domain.com:~# mount -t glusterfs files1.domain.com:/volume_data /data


We look at the result:

 root@client.domain.com:~# df -h
 Filesystem                      Size  Used Avail Use% Mounted on
 /dev/sda1                       7.9G  6.6G  987M  88% /
 none                            3.4G  112K  3.4G   1% /dev
 none                            3.6G     0  3.6G   0% /dev/shm
 none                            3.6G   64K  3.6G   1% /var/run
 none                            3.6G     0  3.6G   0% /var/lock
 none                            3.6G     0  3.6G   0% /lib/init/rw
 /dev/sdb                        414G  6.1G  387G   2% /mnt
 tmpfs                            10M  8.0K   10M   1% /tmp/tmpfs
 files1.domain.com:/volume_data  200G  109G   92G  55% /data


Now add an entry to /etc/fstab on client.domain.com:

 files1.domain.com:/volume_data /data glusterfs defaults,_netdev 0 0 


And to save our nerves at reboot time, we try the usual trick for such cases:

 root@client.domain.com:~# umount /data
 root@client.domain.com:~# mount -a


Check that everything is OK:

 root@client.domain.com:~# df -h
 Filesystem                      Size  Used Avail Use% Mounted on
 /dev/sda1                       7.9G  6.6G  987M  88% /
 none                            3.4G  112K  3.4G   1% /dev
 none                            3.6G     0  3.6G   0% /dev/shm
 none                            3.6G   64K  3.6G   1% /var/run
 none                            3.6G     0  3.6G   0% /var/lock
 none                            3.6G     0  3.6G   0% /lib/init/rw
 /dev/sdb                        414G  6.1G  387G   2% /mnt
 tmpfs                            10M  8.0K   10M   1% /tmp/tmpfs
 files1.domain.com:/volume_data  200G  109G   92G  55% /data


That's all.

Separately, I wanted to dwell on the downsides of the new GlusterFS:

1) If both masters are not online when you set things up, you will get stuck at the very first command.
2) In the previous version I could protect my shares with passwords; in the new one all you can do is what we did above with auth.allow: 10.*, which, as you understand, is not always good practice.
3) The entry in /etc/fstab ties the mount to a single server:

 files1.domain.com:/volume_data /data glusterfs defaults,_netdev 0 0

The Gluster folks say that this binding to a single server is only used to pull the configuration of peers and so on, BUT!
Indeed, if one master disappears, the client knows where the second one is, and everything works out fine as long as the volume is already mounted. Alas, if you have to rebuild, say, bring the machine up from an image while the host specified in fstab is down, your share will not mount, because it cannot pull the config. And this change in the new version is very wrong for a distributed file system, IMHO.
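
One possible mitigation, depending on the GlusterFS build: the mount.glusterfs helper in some releases supports a backupvolfile-server mount option that gives the client a second server to fetch the volfile from. I have not verified that 3.2.x has it, so treat the fstab line below as a sketch to check against your version:

 files1.domain.com:/volume_data /data glusterfs defaults,_netdev,backupvolfile-server=files2.domain.com 0 0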

Source: https://habr.com/ru/post/157029/

