
Clustered LustreFS, or a worldwide file system

#include

Often the cornerstone of server performance is the speed of the file system. It can be increased by building a RAID0 array, where reads and writes are striped across both disks, but sooner or later you run out of disk slots, and the reliability of RAID0 leaves much to be desired: when one disk dies, the whole array goes down with it. With RAID10 we again run into the limit on the number of disks.
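For reference, a striped RAID0 array on Linux is usually assembled with mdadm, roughly like this (the device names /dev/sdb and /dev/sdc are just placeholders):
bash# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc  # stripes I/O across both disks; losing either one loses the whole array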

An alternative is a network file system. The most widespread is NFS, but for production duty its speed and the unintuitive configuration of access rights make it practically unsuitable.
"Comparing GPFS and LusterFS is how to compare IBM and SUN."
Anonymous from the Internet.
In the TOP-300 of the world's supercomputers, half of the first 50 use LustreFS, and its initial setup is very simple. A simple solution is the right solution. The setup below was done on three identical servers, s1, s2 and s3, running CentOS 5.4.

The layout is as follows: the file system table is stored on the MGS/MDT partition, which is also responsible for balancing files across the storage targets. The structure somewhat resembles RAID0, except that when a (non-MGS/MDT) device fails, the system keeps running and returns to a fully operational state once the lost node comes back.
Partitions are combined into nodes; servers, file storage boxes, etc. can act as nodes.
Free up two partitions on s1: sda4 (~50 MB) for the file system structure and sda3 for the remaining space. On the other servers only sda3 is needed.
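Before formatting anything it is worth double-checking the partition layout; the device names here simply follow the example above and may differ on your hardware:
root@s1# fdisk -l /dev/sda  # make sure sda3 and the ~50 MB sda4 exist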
int main ()

1. Create a repository file.
bash# cd /etc/yum.repos.d/
bash# cat > lustre.repo << 'EOF'
[lustre]
name=RHEL/CentOS-$releasever lustre
baseurl=http://quattorsrv.lal.in2p3.fr/packages/lustre/
gpgcheck=0
enabled=1
EOF
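A quick sanity check that yum actually picked up the new repository (the exact output will vary):
bash# yum repolist | grep -i lustre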


2. Install the kernel and packages
bash# yum install kernel-lustre.x86_64 lustre.x86_64
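It does not hurt to verify that the lustre kernel and packages really landed on the system (package names may differ slightly between repository builds):
bash# rpm -qa | grep -i lustre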

3. Reboot into the new kernel
bash# reboot
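After the reboot, check which kernel is running; the lustre-patched kernel normally carries a lustre suffix in its version string:
bash# uname -r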

4. Install lustre-modules and lustre-ldiskfs
bash# yum install lustre-modules.x86_64 lustre-ldiskfs.x86_64
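As a sanity check, the lustre modules should now load cleanly (this only loads and lists them, nothing is configured yet):
bash# modprobe lustre; lsmod | grep lustre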

5. The utilities for working with the FS should be downloaded from www.sun.com/software/products/lustre/get.jsp ; the stock ones never worked for me. Only one file is needed.
bash# rpm -Uhv e2fsprogs-1.41.6.sun1-0redhat.rhel5.x86_64.rpm
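Verify that the Sun build of e2fsprogs replaced the stock one (the exact version depends on the file you downloaded):
bash# rpm -q e2fsprogs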

6. Create the MGS/MDT partition (for the file system structure), mount it and add it to fstab
root@s1# mkfs.lustre --fsname=spfs --reformat --mdt --mgs /dev/sda4;
root@s1# mkdir /mgs-mds; mount -t lustre /dev/sda4 /mgs-mds;
root@s1# echo "/dev/sda4 /mgs-mds lustre defaults,_netdev 0 0" >> /etc/fstab;
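If everything went well, the metadata partition shows up as a mounted lustre file system:
root@s1# mount | grep lustre; df -h /mgs-mds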


7. Create the OST data partition, mount it and add it to /etc/fstab
root@s1# mkfs.lustre --fsname=spfs --reformat --ost --mgsnode=192.168.0.1@tcp0 /dev/sda3
root@s1# mkdir /ost; mount -t lustre /dev/sda3 /ost
root@s1# echo "/dev/sda3 /ost lustre defaults,_netdev 0 0" >> /etc/fstab
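At this point the MGS, the MDT and the freshly created OST should all appear in the lustre device list (lctl comes with the lustre packages):
root@s1# lctl dl  # every device should be reported as UP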


8. Log in to s2 and repeat steps 1-5
9. Create the OST data partition, mount it and add it to /etc/fstab
root@s2# mkfs.lustre --fsname=spfs --reformat --ost --mgsnode=192.168.0.1@tcp0 /dev/sda3
root@s2# mkdir -p /ost; mount -t lustre /dev/sda3 /ost
root@s2# echo "/dev/sda3 /ost lustre defaults,_netdev 0 0" >> /etc/fstab


10. Repeat steps 8-9 for s3, s4, ... sn

11. Mount the working file system
root@s2# mkdir -p /work
root@s2# echo "192.168.0.1@tcp0:/spfs /work lustre defaults 0 0" >> /etc/fstab
root@s2# mount /work
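Finally, a check from the client side: lfs df should show the MDT and every OST contributing space to /work (the capacity figures will of course depend on your disks):
root@s2# lfs df -h /work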

return 0

Source: https://habr.com/ru/post/92875/

