
Migration of Proxmox VE 3.0 to software RAID1


This article describes the process of migrating a freshly installed Proxmox VE 3.0 hypervisor to software RAID1. The Proxmox developers state in their wiki that this setup is not officially supported and is not recommended; instead, they suggest using tested hardware RAID controllers. Nevertheless, several guides on the topic, along with reports of software RAID being used successfully with Proxmox, can be found online. Unfortunately, most of these guides cannot be called step-by-step instructions: they all contain errors that prevent you from achieving the desired result. Taking one of them as a basis, I have tried to correct this. The solution below was tested step by step several times, first in a virtual machine and then during a data migration on real hardware. The result is a working how-to, offered here for your attention.

Before starting the migration, we have a freshly installed Proxmox VE 3.0 on the first disk (/dev/sda) and a second, empty disk of the same size (/dev/sdb).
After the migration is completed, both HDDs will be merged into software RAID1, while the data that was on the original disk will be preserved.

Migration will occur in several stages:
0. Install the necessary software.
1. Preparation of disks for transfer to RAID1.
2. Transfer /boot to /dev/md0.
3. Modify /etc/fstab.
4. Modification of Grub2.
5. Transfer LVM to /dev/md1, transfer the source disk to RAID1.

0. Install the necessary software.

Install mdadm and screen. During installation, mdadm will ask which arrays to start at boot; answer "all". Installing screen is optional, but it will come in handy at stage 5, when we transfer the LVM data.
root@kvm0:~# aptitude install mdadm screen 

1. Preparation of disks for transfer to RAID1.

Copy the sda disk partitioning to sdb. From here on I give the full output of the commands so that you can verify the result at every step.
 root@kvm0:~# sfdisk -d /dev/sda | sfdisk -f /dev/sdb
 Checking that no-one is using this disk right now ...
 OK

 Disk /dev/sdb: 4177 cylinders, 255 heads, 63 sectors/track

 sfdisk: ERROR: sector 0 does not have an msdos signature
  /dev/sdb: unrecognized partition table type
 Old situation:
 No partitions found
 New situation:
 Units = sectors of 512 bytes, counting from 0

    Device Boot    Start       End   #sectors  Id  System
 /dev/sdb1   *      2048   1048575    1046528  83  Linux
 /dev/sdb2       1048576  67108863   66060288  8e  Linux LVM
 /dev/sdb3             0         -          0   0  Empty
 /dev/sdb4             0         -          0   0  Empty
 Warning: partition 1 does not end at a cylinder boundary
 Warning: partition 2 does not start at a cylinder boundary
 Warning: partition 2 does not end at a cylinder boundary
 Successfully wrote the new partition table

 Re-reading the partition table ...

 If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
 to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
 (See fdisk(8).)

Mark the sdb disk partitions as "Linux raid autodetect" (type fd).
 root@kvm0:~# sfdisk -c /dev/sdb 1 fd
 Done
 root@kvm0:~# sfdisk -c /dev/sdb 2 fd
 Done

To make sure the disk holds no metadata from previously created RAID arrays, do the following.
 root@kvm0:~# mdadm --zero-superblock /dev/sdb1
 mdadm: Unrecognised md component device - /dev/sdb1
 root@kvm0:~# mdadm --zero-superblock /dev/sdb2
 mdadm: Unrecognised md component device - /dev/sdb2
If the disk is clean, we see the messages above. Otherwise, the command produces no output.
Create the RAID arrays, each with the second member still missing.
 root@kvm0:~# mdadm --create -l 1 -n 2 /dev/md0 missing /dev/sdb1 --metadata=1.1
 mdadm: array /dev/md0 started.
 root@kvm0:~# mdadm --create -l 1 -n 2 /dev/md1 missing /dev/sdb2 --metadata=1.1
 mdadm: array /dev/md1 started.

Add the array information to mdadm.conf (keeping a backup of the original file).
 root@kvm0:~# cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
 root@kvm0:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
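`mdadm --examine --scan` appends one `ARRAY` definition per array. As a quick sanity check you can confirm both arrays landed in the file; a minimal sketch, using a temp file with made-up UUIDs in place of the real /etc/mdadm/mdadm.conf:

```shell
# Sanity-check sketch: after the append, both md0 and md1 should have an
# ARRAY line. The temp file and UUIDs below are placeholders.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY /dev/md0 metadata=1.1 UUID=0d6ee21a:3f77a822:0ff68c85:1b2c3d4e
ARRAY /dev/md1 metadata=1.1 UUID=7c2f11aa:9d01be33:5a6e0f12:6f7a8b9c
EOF
grep -c '^ARRAY' "$conf"   # prints 2 - one line per array
```

On the real system the same `grep -c '^ARRAY' /etc/mdadm/mdadm.conf` should report 2.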

2. Transfer /boot to /dev/md0.

Create a file system on /dev/md0, mount it at /mnt/md0, and copy the contents of /boot there.
 root@kvm0:~# mkfs.ext3 /dev/md0
 mke2fs 1.42.5 (29-Jul-2012)
 Filesystem label=
 OS type: Linux
 Block size=1024 (log=0)
 Fragment size=1024 (log=0)
 Stride=0 blocks, Stripe width=0 blocks
 131072 inodes, 522944 blocks
 26147 blocks (5.00%) reserved for the super user
 First data block=1
 Maximum filesystem blocks=67633152
 64 block groups
 8192 blocks per group, 8192 fragments per group
 2048 inodes per group
 Superblock backups stored on blocks:
 	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

 Allocating group tables: done
 Writing inode tables: done
 Creating journal (8192 blocks): done
 Writing superblocks and filesystem accounting information: done

 root@kvm0:~# mkdir /mnt/md0
 root@kvm0:~# mount /dev/md0 /mnt/md0
 root@kvm0:~# cp -ax /boot/* /mnt/md0

3. Modify /etc/fstab.

Comment out the UUID-based mount of the boot partition in /etc/fstab and add a mount via /dev/md0 instead.
 root@kvm0:~# sed -i 's/^UUID/#UUID/' /etc/fstab
 root@kvm0:~# echo '/dev/md0 /boot ext3 defaults 0 1' >> /etc/fstab

As a result, /etc/fstab should look like this.
 root@kvm0:~# cat /etc/fstab
 # <file system> <mount point> <type> <options> <dump> <pass>
 /dev/pve/root / ext3 errors=remount-ro 0 1
 /dev/pve/data /var/lib/vz ext3 defaults 0 1
 #UUID=eb531a48-dea8-4356-9b56-8aa800f14d68 /boot ext3 defaults 0 1
 /dev/pve/swap none swap sw 0 0
 proc /proc proc defaults 0 0
 /dev/md0 /boot ext3 defaults 0 1
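If you want to see what the sed-and-echo edit does before touching the real /etc/fstab, the transform can be rehearsed on a throwaway copy; a minimal sketch (the sample fstab content below is abbreviated):

```shell
# Rehearsal sketch: apply the same fstab edit to a temp file first.
f=$(mktemp)
cat > "$f" <<'EOF'
/dev/pve/root / ext3 errors=remount-ro 0 1
UUID=eb531a48-dea8-4356-9b56-8aa800f14d68 /boot ext3 defaults 0 1
EOF
sed -i 's/^UUID/#UUID/' "$f"                     # comment out the UUID-based /boot mount
echo '/dev/md0 /boot ext3 defaults 0 1' >> "$f"  # mount /boot from the array instead
cat "$f"
```

Once the output looks right, run the same two commands against /etc/fstab itself.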


Reboot.

4. Modification of Grub2.

Add support for RAID1.
 root@kvm0:~# echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
 root@kvm0:~# echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
 root@kvm0:~# echo 'GRUB_TERMINAL=console' >> /etc/default/grub
 root@kvm0:~# echo raid1 >> /etc/modules
 root@kvm0:~# echo raid1 >> /etc/initramfs-tools/modules
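A note on the plain echo appends: run a second time, they leave duplicate lines in the config files. If you expect to re-run this step, an append-if-absent helper keeps them clean; a small sketch, using a temp file as a stand-in for /etc/default/grub:

```shell
# Idempotent-append sketch: add a setting only if that exact line is absent.
# A temp file stands in for /etc/default/grub here.
grubconf=$(mktemp)
add_once() { grep -qxF "$1" "$2" || echo "$1" >> "$2"; }
add_once 'GRUB_DISABLE_LINUX_UUID=true' "$grubconf"
add_once 'GRUB_DISABLE_LINUX_UUID=true' "$grubconf"   # second call is a no-op
add_once 'GRUB_PRELOAD_MODULES="raid dmraid"' "$grubconf"
wc -l < "$grubconf"   # prints 2, not 3
```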

Install the bootloader on both disks, then regenerate the GRUB config and the initramfs.
 root@kvm0:~# grub-install /dev/sda --recheck
 Installation finished. No error reported.
 root@kvm0:~# grub-install /dev/sdb --recheck
 Installation finished. No error reported.
 root@kvm0:~# update-grub
 Generating grub.cfg ...
 Found linux image: /boot/vmlinuz-2.6.32-20-pve
 Found initrd image: /boot/initrd.img-2.6.32-20-pve
 Found memtest86+ image: /memtest86+.bin
 Found memtest86+ multiboot image: /memtest86+_multiboot.bin
 done
 root@kvm0:~# update-initramfs -u
 update-initramfs: Generating /boot/initrd.img-2.6.32-20-pve

5. Transfer LVM to /dev/md1, transfer the source disk to RAID1.

Add the boot partition of the source disk /dev/sda to RAID1.
 root@kvm0:~# sfdisk -c /dev/sda 1 fd
 Done
 root@kvm0:~# mdadm --add /dev/md0 /dev/sda1
 mdadm: added /dev/sda1

Now we need to move the data from the LVM partition /dev/sda2 to /dev/md1. pvmove takes quite a long time, so the remaining steps are performed inside screen.
 root@kvm0:~# screen bash
 root@kvm0:~# pvcreate /dev/md1
   Writing physical volume data to disk "/dev/md1"
   Physical volume "/dev/md1" successfully created
 root@kvm0:~# vgextend pve /dev/md1
   Volume group "pve" successfully extended
 root@kvm0:~# pvmove /dev/sda2 /dev/md1
   /dev/sda2: Moved: 2.0%
   /dev/sda2: Moved: 14.5%
   /dev/sda2: Moved: 17.5%
   /dev/sda2: Moved: 19.2%
   /dev/sda2: Moved: 20.3%
   /dev/sda2: Moved: 24.7%
   /dev/sda2: Moved: 31.4%
   /dev/sda2: Moved: 32.5%
   /dev/sda2: Moved: 43.6%
   /dev/sda2: Moved: 63.3%
   /dev/sda2: Moved: 81.4%
   /dev/sda2: Moved: 100.0%
 root@kvm0:~# vgreduce pve /dev/sda2
   Removed "/dev/sda2" from volume group "pve"

Add the second partition of the source disk to RAID1.
 root@kvm0:~# sfdisk --change-id /dev/sda 2 fd
 Done
 root@kvm0:~# mdadm --add /dev/md1 /dev/sda2
 mdadm: added /dev/sda2

We pour a cup of coffee and watch the array synchronization via cat /proc/mdstat.
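While waiting, the resync progress can also be pulled out of /proc/mdstat programmatically; a sketch, run here against a fabricated mdstat sample rather than the live file:

```shell
# Sketch: extract the recovery percentage from /proc/mdstat-style text.
# The heredoc below is a fabricated sample; on a live system replace it
# with:  mdstat=$(cat /proc/mdstat)
mdstat=$(cat <<'EOF'
md1 : active raid1 sda2[2] sdb2[1]
      33030144 blocks super 1.1 [2/1] [_U]
      [=======>.............]  recovery = 37.1% (12261248/33030144) finish=4.9min speed=69632K/sec
EOF
)
pct=$(printf '%s\n' "$mdstat" | grep -o '[0-9.]\+%' | head -n1)
echo "resync at $pct"
```

Or simply run `watch -n5 cat /proc/mdstat` to follow the progress bar directly.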
This completes the migration of Proxmox VE 3.0 to software RAID1.

URLs:
pve.proxmox.com/wiki/Software_RAID
pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
dominicpratt.de/proxmox-ve-3-0-software-raid
www.howtoforge.com/proxmox-2-with-software-raid
www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze

Source: https://habr.com/ru/post/186818/

