A small digression: this lab is synthetic.
Some of the tasks described here could be done much more simply, but since the purpose of the lab is to get acquainted with RAID and LVM functionality, some operations are artificially complicated.
This lab deals with a delicate matter: data integrity. It is an area where you can lose all your data because of the smallest mistake, a single extra letter or digit.
Since you are doing a lab, nothing serious is at stake, except perhaps having to start it over.
In real life everything is much more serious, so enter disk names very carefully, understanding what the current command does and which disks you are working with.
The second important point is the naming of disks and partitions: depending on the situation, the disk numbers may differ from the values shown in the commands of this lab.
For example, if you remove the sda disk from the array and then add a new disk, the new disk will appear in the system as sda. If you reboot before adding the new disk, the new disk will be named sdb, and the old one will become sda.
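To avoid confusion from such renaming, it can help to identify disks by something more stable than sdX. A small convenience sketch using standard udev symlinks and lsblk columns (the exact names shown depend on your virtual disk model and serial):
ls -l /dev/disk/by-id/            # persistent names built from model/serial, each pointing at an sdX device
lsblk -o NAME,SIZE,SERIAL,MODEL   # another way to match "physical" disks to their current sdX names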
The lab must be performed as the superuser (root), since most commands require elevated privileges and it makes no sense to constantly escalate via sudo.
Create a new virtual machine with the following characteristics:
Start the Linux installation and, when you reach the hard drive selection step, do the following:
Finish the OS installation by installing grub on the first device (sda) and boot the system.
Copy the contents of the /boot partition from the sda disk (ssd1) to the sdb disk (ssd2):
dd if=/dev/sda1 of=/dev/sdb1
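The plain dd above is enough; if you prefer to see progress and make sure the data is flushed at the end, a hedged variant using standard GNU dd options looks like this:
dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress conv=fsync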
Install grub on the second device:
View drives in the system:
fdisk -l
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Find the disk on which grub was not installed and perform this installation:
grub-install /dev/sdb
Describe in your own words what you did and what result you obtained.
After completing this task, it is recommended to back up the folder with the virtual machine or make a vagrant box.
Result: Virtual machine with ssd1, ssd2 disks.
cat /proc/mdstat
fdisk -l
sfdisk -d /dev/XXXX | sfdisk /dev/YYY
fdisk -l
mdadm --manage /dev/md0 --add /dev/YYY
cat /proc/mdstat
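To follow the rebuild continuously instead of re-running the command above, two standard utilities are handy (a convenience sketch, not required by the lab; the array name matches the one used in this task):
watch -n1 cat /proc/mdstat   # refresh the sync progress every second
mdadm --detail /dev/md0      # a more detailed view of the array state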
You should see the synchronization start. Now you need to manually synchronize the non-RAID partitions. To do this, use the dd utility, copying from the surviving disk to the new one you have just installed:
dd if=/dev/XXX of=/dev/YYY
Describe in your own words what you did and what result you obtained.
Result: the ssd1 disk is removed, the ssd2 disk is kept, the ssd3 disk is added.
This is the most difficult and most voluminous task of all. Check very carefully what you are doing and with which disks and partitions. It is recommended to make a backup copy before starting. This task is independent of task number 2 and can be performed right after task number 1, adjusted for the disk names.
The second part of this task should bring the machine back to exactly the same state it was in after the first part.
To make your work easier, I recommend not deleting the disks from the host machine, but only disconnecting them in the properties of the VM. From the point of view of the OS inside the VM it looks exactly the same, but if something goes wrong you can reconnect the disk and continue the work by rolling back a couple of steps, for example if you did something wrong or forgot to copy the /boot partition to the new disk. I can only advise you to double-check several times which disks and partitions you are working with, and better yet, write down on a piece of paper the correspondence between disks, partitions and the "physical" disk numbers. The lsblk command draws a nice, clear tree; use it as often as possible to analyze what you have done and what you still need to do.
Now, on to the story...
Imagine that your server worked for a long time on 2 ssd disks, when suddenly ...
Emulate ssd2 disk failure by removing the disk from the properties of the VM and rebooting.
View the current status of disks and RAID:
cat /proc/mdstat
fdisk -l
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
You were lucky: management approved the purchase of several new disks:
2 large-capacity SATA HDDs for the long overdue task of moving the log partition to separate disks, and 2 SSDs to replace the dead one as well as the one that is still working.
Note that the server chassis supports only 4 disks at a time, so you cannot add all the disks at once.
Choose an HDD capacity 2 times larger than the SSD.
Choose an SSD capacity 1.25 times the old SSD.
Add one new ssd disk, calling it ssd4, and after adding, check what happened:
fdisk -l
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
First of all, you should take care to preserve the data from the old disk; this time we will transfer it using LVM.
Start by copying the partition table from the old disk to the new one:
sfdisk -d /dev/XXX | sfdisk /dev/YYY
Substitute the correct disks for XXX and YYY, and work out what this command does.
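If you prefer a more cautious workflow, the dump can go through an intermediate file so you can review it before writing anything (the device names below are placeholders; adjust them to your system):
sfdisk -d /dev/sdX > parts.dump   # dump the partition table of the source disk into a file
cat parts.dump                    # inspect the start, size and type of each partition
sfdisk /dev/sdY < parts.dump      # replicate the layout onto the new disk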
Using the dd command, copy the /boot data to the new disk:
dd if=/dev/XXX of=/dev/YYY
If /boot is still mounted from the old disk, it should be remounted from the live disk:
mount | grep boot   # check from which device /boot is currently mounted (lsblk also shows this)
umount /boot        # unmount /boot
mount -a            # mount everything again according to /etc/fstab
Install the bootloader on a new ssd disk:
grub-install /dev/YYY
Why do we perform this operation?
Create a new raid array with the inclusion of only one new ssd disk:
mdadm --create --verbose /dev/md63 --level=1 --raid-devices=1 /dev/YYY
The command above will not work without a special key. Read the help and add this key to the command.
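For reference (check man mdadm yourself): mdadm refuses to create a RAID1 array with a single member as an "unusual" configuration unless --force is given before the number of drives, so the working command looks roughly like this (the partition name is a placeholder):
mdadm --create --verbose /dev/md63 --level=1 --force --raid-devices=1 /dev/sdX2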
The next step is to configure LVM.
Create a new physical volume to include a previously created RAID array:
pvcreate /dev/md63
Increase the size of the Volume Group system using this command:
vgextend system /dev/md63
Execute the commands and write down what you see and what has changed.
vgdisplay system -v
pvs
vgs
lvs -a -o+devices
On which physical disk are the LVs var, log and root located now?
Move the data from the old disk to the new one by substituting the correct device names.
pvmove -i 10 -n /dev/system/root /dev/md0 /dev/md63
Repeat the operation for all logical volumes.
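For example, assuming the remaining LVs are named var and log as elsewhere in this lab, the repetition would look like this:
pvmove -i 10 -n /dev/system/var /dev/md0 /dev/md63
pvmove -i 10 -n /dev/system/log /dev/md0 /dev/md63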
Execute the commands and write down what you see and what has changed.
vgdisplay system -v
pvs
vgs
lvs -a -o+devices
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Modify our VG by removing the old RAID device from it. Substitute the correct RAID name.
vgreduce system /dev/md0
Execute the commands and write down what you see and what has changed.
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
pvs
vgs
ls /boot
should show several files and directories. Examine what is stored in this partition and write down what each file or directory is responsible for.
Remove the ssd3 disk and add ssd5, hdd1 and hdd2 according to the requirements above.
Check what happened after adding the disks:
fdisk -l
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Restore the operation of the main raid array:
Copy the partition table, substituting the correct disks:
sfdisk -d /dev/XXX | sfdisk /dev/YYY
Please note that when we copied the partition table from the old disk, a message said that the new layout does not use the entire capacity of the hard disk. Therefore, we will soon need to resize this partition and grow the RAID. See for yourself by entering the command:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Copy the boot partition from the ssd4 disk to ssd5:
dd if=/dev/XXX of=/dev/YYY
Install grub to a new disk (ssd5).
Change the size of the second partition of the ssd5 drive.
Run the disk layout utility:
fdisk /dev/XXX
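For orientation, a typical interactive fdisk session for this step looks roughly as follows (keystrokes shown as comments; the exact prompts vary between fdisk versions, and the recreated partition must start at the same sector as before):
# p              print the table and note the start sector of partition 2
# d, 2           delete partition 2 (only the table entry; the data on disk stays in place)
# n, p, 2        recreate partition 2 with the same start sector and the default (maximum) end
# N              answer "No" if asked whether to remove the existing RAID signature
# t, 2, fd       set the partition type back to Linux raid autodetect if needed
# w              write the table and exit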
Re-read the partition table and check the result:
partx -u /dev/XXX
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Add a new disk to the current raid array (do not forget to substitute the correct disks):
mdadm --manage /dev/md63 --add /dev/sda2
Expand the number of disks in our array to 2 pieces:
mdadm --grow /dev/md63 --raid-devices=2
See the result: the array is now made of 2 devices, but the two partitions included in it have different sizes:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Increase the partition size on the ssd4 disk.
Run the disk layout utility:
fdisk /dev/XXX
Re-read the partition table and check the result.
partx -u /dev/XXX
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Notice that now the sda2, sdc2 partitions are larger than the size of the raid device.
At this stage, the raid size can now be expanded:
mdadm --grow /dev/md63 --size=max
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT   # check the result
View lsblk and write down what has changed.
However, even though we have grown the RAID, the LVM volumes themselves (the PV and the LVs root, var, log) have not changed in size.
Look at what size PV equals:
pvs
Expand the size of our PV:
pvresize /dev/md63
Look at what size PV equals:
pvs
Add the newly available space to the LVs var and root:
lvs   # before
lvextend -l +50%FREE /dev/system/root
lvextend -l +100%FREE /dev/system/var
lvs   # after
At this stage, you have completed the migration of the main array to the new disks; the work with ssd1 and ssd2 is over.
Our next task is to move /var/log to the new disks; for this we will create a new array and LVM on the hdd disks.
Let's see what names the new hdd drives have:
fdisk -l
Create a raid array:
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
Create a new PV on the RAID array of large disks:
pvcreate /dev/md127
Create a volume group called data on this PV:
vgcreate data /dev/md127
Create a logical volume the size of all the free space and call it var_log:
lvcreate -l 100%FREE -n var_log data
lvs   # check the result
Format the created partition in ext4:
mkfs.ext4 /dev/mapper/data-var_log
Let's see the result:
lsblk
Transfer the log data from the old partition to the new one.
Temporarily mount the new log storage:
mount /dev/mapper/data-var_log /mnt
Perform partition synchronization:
apt install rsync
rsync -avzr /var/log/ /mnt/
Find out which processes are currently working with /var/log:
apt install lsof
lsof | grep '/var/log'
Stop these processes:
systemctl stop rsyslog.service syslog.socket
Perform the final synchronization of partitions (the data that may have changed since the last synchronization):
rsync -avzr /var/log/ /mnt/
Swap the partitions:
umount /mnt
umount /var/log
mount /dev/mapper/data-var_log /var/log
Checking what happened:
lsblk
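Do not forget to start the logging services again once the new partition is in place (this simply reverses the stop command used above):
systemctl start syslog.socket rsyslog.service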
Edit /etc/fstab.
fstab is the file in which the rules are written according to which partitions are mounted at boot time. Our task is to find the line in which /var/log is mounted and change the device from system-log to data-var_log.
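As an illustration (the actual line in your fstab may use a UUID or slightly different options, so adjust rather than copy blindly), the change looks roughly like this:
# before
/dev/mapper/system-log    /var/log   ext4   defaults   0   2
# after
/dev/mapper/data-var_log  /var/log   ext4   defaults   0   2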
The most important thing at this stage is to remember to resize the file system on the partition (ext4 in our case). No matter how we change the RAID or LVM, until the file system on a partition is told that the partition size has changed, we will not be able to use the new space. Use the resize2fs command to resize the file system.
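A minimal sketch, assuming the LV names used in this lab (ext4 can be grown online, so the volumes may stay mounted):
resize2fs /dev/mapper/system-root   # grow the filesystem to fill the extended LV
resize2fs /dev/mapper/system-var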
The final touch
Check that everything we wanted to do has actually been done:
pvs
lvs
vgs
lsblk
cat /proc/mdstat
[OPTIONAL] Perform the following actions.
You now have an unnecessary LV log in the VG system. Distribute this space to root or var, but instead of using the 100%FREE construction, specify the size by hand with the -L switch:
-L 500M
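A possible sketch of this optional step (LV names as used in this lab; the 500M figures are arbitrary examples):
lvremove /dev/system/log            # the old log LV is no longer mounted after the swap above; lvremove asks for confirmation
lvextend -L +500M /dev/system/root
lvextend -L +500M /dev/system/var
resize2fs /dev/mapper/system-root   # let the filesystems use the new space
resize2fs /dev/mapper/system-var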
Source: https://habr.com/ru/post/450896/