
The other day I decided to give ZFS a try, but I could not find a detailed and simple guide to setting it up on CentOS, so I decided to fix that. On top of that, I wanted to do the whole installation in EFI mode (why stand still?), and along the way to understand for myself how DKMS works, as well as the details of manually installing RPM-based distributions.
ZFS was not chosen by chance either: I planned to deploy a hypervisor on this machine and use zvols to store virtual machine images. I wanted something more than software RAID + LVM or plain file-based image storage, something like Ceph, but for a single host that would be too bold. Looking ahead, I will say that I was very pleased with this file system, its performance, and all of its features.
Preparation
First, take a CentOS LiveCD image, for example from the Yandex mirror. We need the live image, not netinstall or minimal, because the installation requires a fully working Linux system. Write the image to a blank disc or a flash drive and boot from it. You must boot in EFI mode, otherwise it will not be possible to create the boot entry in EFI.
Install the epel and zol repositories:
yum -y install epel-release
yum -y localinstall http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
Install the package with kernel headers:
yum -y install kernel-devel
Next comes a small trick without which ZFS simply will not build on our LiveCD: the kernel's build symlink is broken (the kernel-devel version does not match the running LiveCD kernel exactly), so we repoint it at the kernel-devel tree we just installed:
rm -f /lib/modules/$(uname -r)/build
ln -s /usr/src/kernels/* /lib/modules/$(uname -r)/build
Now you can install the zfs package:
yum -y install zfs
After installing the package, check that the module was built for our kernel and, if everything is OK, load it:
dkms status
modprobe zfs
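If you script the installation, the dkms status output can be checked automatically instead of by eye. A minimal sketch: the dkms_ok helper below is my own, not part of DKMS; it only assumes that a successful status line ends with ": installed", as in "zfs, 0.6.5.3, 3.10.0-229.el7.x86_64, x86_64: installed".

```shell
# Hypothetical helper (not part of dkms): succeed if a `dkms status`
# output line reports the module as installed.
dkms_ok() {
    case "$1" in
        *": installed"*) return 0 ;;
        *) return 1 ;;
    esac
}
```

With it, `dkms status | grep zfs` output can gate the modprobe in a script.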
If something went wrong, fix the errors and rebuild the modules with dkms autoinstall.
Disk layout
In my case there are three disks of 2 TB each; I want to partition them so that each ends up with three partitions:
- efi (FAT16 / 100 MB) - the bootloader and its configuration files will be stored here
- boot (ext4 / 412 MB) - we will combine these partitions from all three disks into a software RAID1; the kernels and initramfs images needed to boot the system will live here
- data (zfs / everything else) - on these partitions from all three disks we will create a zpool with RAIDZ, create in it the datasets mounted at /, /home, /var, etc., and install the system onto them
Getting Started:
parted /dev/sda
mklabel gpt
mkpart ESP fat16 1MiB 101MiB
set 1 boot on
mkpart boot 101MiB 513MiB
mkpart data 513MiB 100%
Do the same for /dev/sdb and /dev/sdc. Create the file system for the EFI partitions:
mkfs.msdos -F 16 /dev/sd{a,b,c}1
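The {a,b,c} here is bash brace expansion: the shell expands it textually into one argument per disk, so mkfs runs once for each. A quick illustration (assumes bash; brace expansion is purely textual, so the devices do not even need to exist):

```shell
# Brace expansion happens before any globbing or command execution,
# so this prints the three device paths regardless of what exists.
printf '%s\n' /dev/sd{a,b,c}1
```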
FAT16 is not chosen at random: the minimum partition size for FAT32 is 512 MB, and that is too much.
OK, now let's deal with /boot: create a software RAID1 from the second partitions of our disks, and a file system on top of it right away:
mdadm --create --verbose /dev/md0 --level=1 --metadata=0.90 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mkfs.ext4 /dev/md0
It's time to create our zpool. For this operation it is recommended to refer to the disks by ID; here is how the command looked in my case:
zpool create -m none -o ashift=12 rpool raidz \
    /dev/disk/by-id/ata-TOSHIBA_DT01ACA200_74FWM9LKS-part3 \
    /dev/disk/by-id/ata-TOSHIBA_DT01ACA200_74FWMHDKS-part3 \
    /dev/disk/by-id/ata-TOSHIBA_DT01ACA200_74FWR4VKS-part3
The only thing you need to decide on is the ashift parameter. For disks with 512-byte sectors specify ashift=9, for 4K disks ashift=12. You can check the sector size with fdisk -l.
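ashift is simply the base-2 logarithm of the sector size: 2^9 = 512, 2^12 = 4096. A small sketch of the calculation (the ashift_for helper is illustrative, not a real tool; the sector size can also be read from /sys/block/<disk>/queue/physical_block_size):

```shell
# Compute ashift = log2(sector size); assumes a power-of-two size.
ashift_for() {
    size=$1
    val=0
    while [ "$size" -gt 1 ]; do
        size=$((size / 2))
        val=$((val + 1))
    done
    echo "$val"
}

ashift_for 512    # -> 9
ashift_for 4096   # -> 12
```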
Now we create the datasets we need; in ZFS this is easy:
zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/centos-1
zfs create -o mountpoint=/home rpool/home
zfs create -o mountpoint=/var rpool/var
Done, the disks are laid out.
System installation
We will install everything manually, so let's get started. Mount all our partitions under /mnt; note that for now we mount the EFI partition of only one disk, we will deal with the rest later:
zpool import -o altroot=/mnt rpool
mkdir -p /mnt/boot/efi
mount /dev/md0 /mnt/boot/
mount /dev/sda1 /mnt/boot/efi/
We have just prepared a home for our new system. Now initialize the RPM database in it, install the release package for the main repository, and then the base system together with the kernel:
rpm --root=/mnt --rebuilddb
curl -O http://mirror.yandex.ru/centos/7/os/x86_64/Packages/centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
rpm --root /mnt -ivh centos-release-*.rpm
yum -y --installroot=/mnt groupinstall base
yum -y --installroot=/mnt install kernel
When the guest system is installed, bind-mount the host's system directories into it and chroot:
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash --login
Now let's configure it. Set a DNS server, preferably a local one, so that name resolution works:
echo 'nameserver 192.168.225.1' > /etc/resolv.conf
Install the epel and zol repositories again, this time inside the chroot:
yum -y install epel-release
yum -y localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
And now ZFS itself should install without any extra tricks:
yum -y install kernel-devel zfs
OK, that is installed; now set the time zone:
rm -f /etc/localtime
ln -sf /usr/share/zoneinfo/Europe/Moscow /etc/localtime
Hostname and root password:
echo 'one' > /etc/hostname
echo "root:newpass" | chpasswd
Write the mount points into fstab. Note that the ZFS datasets do not need to be added to fstab; they are mounted from the pool's own metadata:
cat /proc/mounts | grep /boot >> /etc/fstab
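Illustratively, here is what that filter picks out: only the /boot and /boot/efi lines end up in fstab, while the ZFS root stays out of it. The sample mount lines below are made up for the example:

```shell
# Fabricated sample of /proc/mounts content for this layout:
mounts='rpool/ROOT/centos-1 / zfs rw,relatime 0 0
/dev/md0 /boot ext4 rw,relatime 0 0
/dev/sda1 /boot/efi vfat rw,relatime 0 0'

# Only the /boot mounts survive the grep; the zfs line is dropped:
printf '%s\n' "$mounts" | grep /boot
```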
It is also necessary to save the information about our RAID array:
mkdir /etc/mdadm
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Done, our system is installed and configured, now we will deal with the bootloader.
Install the bootloader
In principle, in the case of EFI we could do without a bootloader at all, since the Linux kernel has supported EFISTUB (booting directly via EFI without a bootloader) for quite some time. But that is not our case, for two reasons: first, the EFI partition on which the kernel would have to live cannot be combined into a software RAID, so after every kernel update we would have to copy this partition to the other disks; second, out of the box CentOS is not well adapted to this kind of boot, so it is recommended to use GRUB2.
Install GRUB2 for UEFI:
yum -y install grub2-efi
So that grub2-install does not complain about the ZFS partitions, we need to build and install one more package, grub-zfs-fixer:
yum groupinstall "Development Tools"
curl https://codeload.github.com/Rudd-O/zfs-fedora-installer/tar.gz/master | tar xzv
cd zfs-fedora-installer-master/
tar cvzf grub-zfs-fixer.tar.gz grub-zfs-fixer/
rpmbuild -ta grub-zfs-fixer.tar.gz
yum localinstall ~/rpmbuild/RPMS/noarch/grub-zfs-fixer-0.0.3-1.noarch.rpm
Done; now install GRUB2 and generate its config:
grub2-install
grub2-mkconfig -o /boot/grub/grub.cfg
GRUB2 should have created a boot entry in your EFI variables; check it:
efibootmgr -v
Copy our EFI partition to the other disks:
dd if=/dev/sda1 of=/dev/sdb1 bs=4M
dd if=/dev/sda1 of=/dev/sdc1 bs=4M
Now it only remains to add the zfs module to the initramfs; to do that, run:
yum -y install zfs-dracut
dracut -f /boot/initramfs-3.10.0-229.14.1.el7.x86_64.img 3.10.0-229.14.1.el7.x86_64
Note that the path to the initramfs is passed as the first argument and the kernel version as the second. If the second argument is omitted, the image is generated for the currently running kernel, and since we are working in a chroot, that is the LiveCD kernel, which is older than the one installed in the guest system.
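If you do not want to hard-code the version, the installed kernel can be picked up from /lib/modules inside the chroot. A sketch under that assumption; newest_kernel is my own helper, not a standard tool:

```shell
# Pick the highest version from a list of kernel version strings,
# using version sort (sort -V) rather than plain lexical sort.
newest_kernel() {
    printf '%s\n' "$@" | sort -V | tail -n1
}

# Usage inside the chroot (illustrative):
#   kver=$(newest_kernel $(ls /lib/modules))
#   dracut -f "/boot/initramfs-${kver}.img" "$kver"
newest_kernel 3.10.0-123.el7.x86_64 3.10.0-229.14.1.el7.x86_64
```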
That's all. Exit the chroot, unmount /mnt, and reboot into our freshly installed system.
exit
umount -R /mnt
reboot
Sources:
- HOWTO install Ubuntu to a Native ZFS Root Filesystem
- HOWTO install Debian GNU/Linux to a Native ZFS Root Filesystem
- topic on Google Groups
- Hardforum thread
UPD: In addition to this article, thatsme published his own article where he describes ZFS in more detail.