We have a RAID1 array of two disks plus two additional disks; the task is to add these two disks to the array and migrate to RAID10 without data loss. The situation is complicated by the fact that /boot is not located in the raid but lives on only one of the disks, so to improve the server's fault tolerance the bootloader needs to be moved onto raid1 as well.
All of the described actions were carried out on a live production server. The scheme is universal and suits other initial conditions as well; in the same way you can migrate from raid10 back to raid1.
We have:
On the disk /dev/sdd1 is /boot
On the array /dev/md1 is /
On the array /dev/md2 is swap
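Before starting, it is worth double-checking that the layout really looks like this; a few read-only commands are enough:
cat /proc/mdstat
df -h / /boot
swapon -s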
If you have already solved the bootloader issue, you can skip straight to the migration section.
Moving the bootloader
The /dev/sdd disk holds both data and the bootloader, so we will treat it as the reference; all other disks can be considered empty. For reliability we will not place the bootloader on raid10 but keep it on a raid1 of 2 disks (3 or 4 would also work), for greater fault tolerance.
Create partitions on the sdb disk one-to-one as on sdd. Either manually, for example using
fdisk /dev/sdb
Or simply duplicate the partition table:
sfdisk -d /dev/sdd --force | sfdisk /dev/sdb --force
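Either way, verify afterwards that the partition tables really match, for example:
sfdisk -l /dev/sdd
sfdisk -l /dev/sdb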
The bootloader itself lives on /dev/sdd1, so create a degraded raid1 /dev/md4 as follows
mdadm --create /dev/md4 --level=1 --raid-disks=2 missing /dev/sdb1
mke2fs -j /dev/md4
After creating any new array you need to update the saved raid information, otherwise the arrays will fall apart after a reboot.
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
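At this point the new array should be visible as degraded (one half of the mirror "missing"); a quick sanity check:
cat /proc/mdstat
mdadm --detail /dev/md4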
Now reboot the server. After the reboot you will see a strange array /dev/md127 that needs to be gotten rid of, while our /dev/md4 may not appear at all, since /dev/md127 has taken its place. Solving this is quite simple: stop both arrays, regenerate the config, and reassemble /dev/md4
mdadm -S /dev/md127
mdadm -S /dev/md4
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --assemble /dev/md4
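Note that appending the --examine --scan output a second time leaves duplicate ARRAY lines in the config. A minimal cleanup sketch, assuming you have no hand-written ARRAY entries you need to keep:
sed -i '/^ARRAY/d' /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf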
For reliability it is worth rebooting once more. After that comes the most critical part: the GRUB2 bootloader must be edited so that it boots the server from the newly created array. To do this, find out the UUID of the old bootloader disk /dev/sdd1 and of the new array /dev/md4
ls -l /dev/disk/by-uuid
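blkid prints the same information directly, if you prefer:
blkid /dev/sdd1 /dev/md4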
Edit /boot/grub/grub.cfg. Everywhere the old UUID of /dev/sdd1 occurs, replace it with the new UUID of /dev/md4 (59f76eb9-00d2-479e-b94e-6eb54fc574d4 in this example); in practice that means every search line.
Wherever there is a set root line, it also needs to be replaced. For example, it was
set root='(hd0)'
and it becomes
set root='(md/4)'
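If you would rather not edit every occurrence by hand, a sed one-liner can do the UUID swap; back up the config first. OLD_UUID below is a placeholder for the /dev/sdd1 UUID you found above, and the set root lines still need to be changed manually:
cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
sed -i 's/OLD_UUID/59f76eb9-00d2-479e-b94e-6eb54fc574d4/g' /boot/grub/grub.cfg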
An example of the resulting new config
insmod raid
insmod mdraid
insmod part_msdos
insmod part_msdos
insmod part_msdos
insmod part_msdos
insmod ext2
set root='(md/4)'
search --no-floppy --fs-uuid --set 59f76eb9-00d2-479e-b94e-6eb54fc574d4
set locale_dir=($root)/grub/locale
And the ### BEGIN /etc/grub.d/10_linux ### section will look like this
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
insmod raid
insmod mdraid
insmod part_msdos
insmod part_msdos
insmod part_msdos
insmod part_msdos
insmod ext2
set root='(md/4)'
search --no-floppy --fs-uuid --set 59f76eb9-00d2-479e-b94e-6eb54fc574d4
echo 'Loading Linux 2.6.32-5-amd64 ...'
linux /vmlinuz-2.6.32-5-amd64 root=/dev/md1 ro quiet
echo 'Loading initial ramdisk ...'
initrd /initrd.img-2.6.32-5-amd64
}
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
It is very important not to reboot now: after this edit the bootloader already thinks that the boot data lives on /dev/md4, and there is nothing there yet. To keep yourself a loophole, do not edit the recovery mode section; that way you can still boot with the old data, although for this you will need KVM-over-IP access or a DC employee to select recovery mode at boot time.
Now update the ramdisk, otherwise the system simply will not boot
update-initramfs -u
Having made the bootloader changes, we can transfer the contents of /boot to the newly created array /dev/md4
mkdir /mnt/md4
mount /dev/md4 /mnt/md4
rsync -avHxl --progress --inplace --exclude 'lost+found' /boot/ /mnt/md4/
umount /mnt/md4/
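Before moving on, it does not hurt to verify the copy, for example by re-mounting and comparing:
mount /dev/md4 /mnt/md4
diff -r --exclude=lost+found /boot /mnt/md4 && echo OK
umount /mnt/md4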
We need to make sure that the GRUB2 bootloader is installed on both hard drives (or on all four, depending on how many you want in the array), as well as on the /dev/md4 array. The safest way to do this is to run
dpkg-reconfigure grub-pc
where you need to select all the disks to which you want to add the bootloader.
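If dpkg-reconfigure is unavailable (on a non-Debian system, say), running grub-install manually on each disk achieves the same thing:
grub-install /dev/sdb
grub-install /dev/sdd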
Besides the bootloader, the system itself must understand that /boot now lives in a different place; for this it is enough to edit the /etc/fstab and /etc/mtab files. We are interested in the line where /boot is mounted: we do not need the UUID there, instead we specify the name of our raid array.
In the /etc/fstab file the /boot line ends up looking something like this (the mount options here are an assumption, keep whatever yours already has):
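/dev/md4  /boot  ext3  defaults  0  2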
In the /etc/mtab file the /boot line similarly becomes (again roughly; mtab reflects what is currently mounted):
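/dev/md4 /boot ext3 rw 0 0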
Now you can reboot, and if everything was done correctly the system will boot from /dev/md4, with /dev/sdd1 no longer used. It only remains to complete our degraded array
mdadm /dev/md4 --add /dev/sdd1
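The resynchronization can be watched until the mirror is fully in sync:
watch cat /proc/mdstat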
Migrating from raid1 to raid10 without data loss
The situation is the same: there is a raid1 array of 2 disks plus 2 free disks; it all needs to be assembled into raid10, and the data must remain intact.
On the new drives, create a partition structure identical to the one on the disks already in the raid
sfdisk -d /dev/sdd --force | sfdisk /dev/sda --force
sfdisk -d /dev/sdd --force | sfdisk /dev/sdb --force
On /dev/md4 is /boot
On /dev/md1 is /
Swap is on /dev/md2
We will not touch /boot and leave it as raid1; we only need to preserve the data on /dev/md1 (the future raid10 will consist of /dev/sda6, /dev/sdb6, /dev/sdc6, /dev/sdd6).
To keep the data safe, we will assemble a degraded raid10 array from 3 disks, transfer the data from the raid1 onto it, then take the raid1 apart and complete the raid10.
To begin, pull one disk out of the raid1, since we need at least 3 disks to create the raid10
mdadm /dev/md1 --fail /dev/sdc6 --remove /dev/sdc6
We assemble the degraded RAID10 as /dev/md3 and format it. Be sure to add a record of the new array, so that it survives a reboot
mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sda6 /dev/sdb6 /dev/sdc6 missing
mke2fs -j /dev/md3
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
If you accidentally rebooted before recording the array data, then run
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --assemble /dev/md3
Transferring the data from /dev/md1 to /dev/md3
mkdir /mnt/md3
mount -t ext3 /dev/md3 /mnt/md3
rsync -avHxl --progress --inplace --exclude 'lost+found' / /mnt/md3/
umount /mnt/md3
That's it: the data is safe on the raid10 and the migration is almost complete. Now you need to tell the system to use the new /dev/md3 instead of the old /dev/md1. To do this, edit the /etc/fstab and /etc/mtab files.
In the /etc/fstab file you need to replace the UUID of /dev/md1 with the UUID of /dev/md3, which you can look up like this
ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 9 Nov 9 20:56 29683c02-5bd7-4805-8608-5815ba578b6c -> ../../md3
The root line then becomes something like this (the mount options are whatever you already had; these are typical Debian defaults):
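UUID=29683c02-5bd7-4805-8608-5815ba578b6c  /  ext3  errors=remount-ro  0  1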
In the /etc/mtab file you just need to replace /dev/md1 with the new /dev/md3 everywhere
/dev/md3 / ext3 rw,errors=remount-ro 0 0
Whenever the device behind /boot or / changes, you must edit the bootloader configuration /boot/grub/grub.cfg and run update-initramfs, otherwise the system will not boot.
In the /boot/grub/grub.cfg file, everywhere 4d7faa7f-25b3-4a14-b644-682ffd52943b (the old UUID of /dev/md1) occurs, replace it with our new UUID 29683c02-5bd7-4805-8608-5815ba578b6c; this matters for the search sections.
And inside the section
### BEGIN /etc/grub.d/10_linux ###
replace root=/dev/md1 with root=/dev/md3
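Again, sed can make both replacements in one pass (back up grub.cfg first); the UUIDs are the ones from this example:
cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
sed -i -e 's/4d7faa7f-25b3-4a14-b644-682ffd52943b/29683c02-5bd7-4805-8608-5815ba578b6c/g' -e 's|root=/dev/md1|root=/dev/md3|g' /boot/grub/grub.cfg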
And after that, be sure to run
update-initramfs -u
Reboot so that / is now served from the new array /dev/md3 and nothing refers to the old raid1 anymore. It remains to finish building the raid10 by adding the disk that currently still sits in the raid1 (/dev/sdd6); but first that array must be stopped and the partition's superblock cleared.
mdadm -S /dev/md1
mdadm --zero-superblock /dev/sdd6
And now add the disk to the raid10 array and update the array records
mdadm /dev/md3 --add /dev/sdd6
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
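Rebuilding onto the fourth disk takes a while; the progress is visible in the usual places:
cat /proc/mdstat
mdadm --detail /dev/md3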
That's all, migration from raid1 to raid10 without data loss is complete.
P.S. In the end I went back to raid1, because in my case the move from raid1 to raid10 did not produce any impressive results; a raid1 of 4 disks performed noticeably better.