
Another migration of Proxmox to softRAID1, but now version 3.2 and on GPT partitions, plus installing FreeNAS 9.2 in a virtual machine and forwarding a physical disk to it

Hello!

Once again, I needed a Proxmox server. The hardware is as follows: AMD FX-4300, 4 GB of RAM, two 500 GB disks for Proxmox itself and two more for storage. The tasks are these: one of the virtual machines is FreeNAS, into which I wanted to forward several disks (preferably physical ones) in order to place storage on them, plus several other VMs that are not relevant to this article.

I have a quirk: I always try to install the most recent versions rather than proven older ones. That is what happened this time as well.
I downloaded Proxmox VE 3.2 and FreeNAS 9.2. What came out of it is under the cut.

Having installed Proxmox once again (the latest version, 3.2, at the moment), I decided to move it to SoftRAID1. But I found that, unlike 3.0, it (Proxmox) now partitions the disk as GPT. Accordingly, the recommendations in the article I was following were not entirely applicable. In addition, all the articles about moving Proxmox to SoftRAID deal with only two partitions (boot and LVM). In my case there were three partitions on the disk: first one for GRUB, and then the usual boot and LVM.
This should not stop us.

Migrating Proxmox to softRAID1 on GPT partitions


We proceed in the standard way and install all the necessary software. And here another surprise awaits us: starting with version 3.1, the Proxmox enterprise repository requires a paid subscription. Therefore, before installing the necessary packages, you need to disable it (perhaps it would be more correct to add the free repository instead, but I got by with simply commenting out the paid one). Open it in any editor:

# nano /etc/apt/sources.list.d/pve-enterprise.list 
and comment out the single line it contains.
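For reference, this is not a command but the contents of the file; after commenting it out, the single line should look roughly like this (the exact URL may differ between releases, so treat this as a sketch):

 # deb https://enterprise.proxmox.com/debian wheezy pve-enterprise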

If you still want to add a free repository, then run the command:
 echo "deb http://download.proxmox.com/debian wheezy pve pve-no-subscription" >> /etc/apt/sources.list.d/proxmox.list 
Thanks to heathen for pointing this out in the comments.

Now install the necessary packages:

 # aptitude update && aptitude install mdadm initramfs-tools screen 
the latter is needed if you are doing this remotely. Moving LVM to RAID takes a long time, and it is advisable to do it inside screen.
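If you have not used screen before, a minimal workflow looks like this (a sketch of my own using standard screen options; the session name is arbitrary):

 # screen -S raidmove    # start a named session and run the long commands inside it
 # screen -r raidmove    # reattach to the same session after a dropped SSH connection

Detach from a running session without killing it with Ctrl+A, then D.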

Check that the raid1 module loads, so that arrays can be created:

 # modprobe raid1 
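If you want to be sure the module actually loaded, a quick check of my own with the standard tools is enough:

 # lsmod | grep raid1
 # cat /proc/mdstat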
Next we copy the partition layout from sda to sdb. This is where the differences between MBR and GPT begin. For GPT, it is done like this:
 # sgdisk -R /dev/sdb /dev/sda
 The operation has completed successfully.
Assign a random UUID to the new hard disk:
 # sgdisk -G /dev/sdb
 The operation has completed successfully.
 # sgdisk --randomize-guids --move-second-header /dev/sdb
 The operation has completed successfully.


Verify that the partitions are created as we wanted:

sda disk:

 # parted -s /dev/sda print
 Model: ATA WDC WD5000AAKS-0 (scsi)
 Disk /dev/sda: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt

 Number  Start   End     Size    File system  Name     Flags
  1      1049kB  2097kB  1049kB               primary  bios_grub
  2      2097kB  537MB   535MB   ext3         primary  boot
  3      537MB   500GB   500GB                primary  lvm

sdb disk:

 # parted -s /dev/sdb print
 Model: ATA ST3500320NS (scsi)
 Disk /dev/sdb: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt

 Number  Start   End     Size    File system  Name     Flags
  1      1049kB  2097kB  1049kB               primary  bios_grub
  2      2097kB  537MB   535MB                primary  boot
  3      537MB   500GB   500GB                primary  lvm


Change the flags on the sdb2 and sdb3 partitions to raid:

 # parted -s /dev/sdb set 2 "raid" on
 # parted -s /dev/sdb set 3 "raid" on
 # parted -s /dev/sdb print
 Model: ATA ST3500320NS (scsi)
 Disk /dev/sdb: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt

 Number  Start   End     Size    File system  Name     Flags
  1      1049kB  2097kB  1049kB               primary  bios_grub
  2      2097kB  537MB   535MB                primary  raid
  3      537MB   500GB   500GB                primary  raid
Everything turned out right.

Go ahead and clean up the superblocks just in case:

 # mdadm --zero-superblock /dev/sdb2
 mdadm: Unrecognised md component device - /dev/sdb2
 # mdadm --zero-superblock /dev/sdb3
 mdadm: Unrecognised md component device - /dev/sdb3
The message "mdadm: Unrecognised md component device - /dev/sdb3" simply means that the partition has never been part of a RAID array before.
Actually, it's time to create arrays:
 # mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
 mdadm: Note: this array has metadata at the start and
     may not be suitable as a boot device.  If you plan to
     store '/boot' on this device please ensure that
     your boot-loader understands md/v1.x metadata, or use
     --metadata=0.90
 Continue creating array? y
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md1 started.
 # mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
 mdadm: Note: this array has metadata at the start and
     may not be suitable as a boot device.  If you plan to
     store '/boot' on this device please ensure that
     your boot-loader understands md/v1.x metadata, or use
     --metadata=0.90
 Continue creating array? y
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md2 started.


Answer the "Continue creating array?" question in the affirmative.

Let's see what we got:

 # cat /proc/mdstat
 Personalities : [raid1]
 md2 : active raid1 sdb3[1]
       487731008 blocks super 1.2 [2/1] [_U]

 md1 : active raid1 sdb2[1]
       521920 blocks super 1.2 [2/1] [_U]


The output shows the state of the arrays: [_U]. This means that each array contains only one disk. That is how it should be, because we have not yet added the second (actually the first, sda) disk to the arrays; it is listed as missing.
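For a more verbose view than /proc/mdstat, mdadm itself can report on each array; in the detailed output the array should show up as degraded, with one slot marked as removed:

 # mdadm --detail /dev/md1
 # mdadm --detail /dev/md2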

Add information about arrays to the configuration file:

 # cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
 # mdadm --examine --scan >> /etc/mdadm/mdadm.conf
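It does not hurt to check what was appended; there should be one ARRAY line per array, something like this (the UUIDs will differ, and the name suffix depends on your hostname, "proxmox" here is just an assumption):

 # grep ARRAY /etc/mdadm/mdadm.conf
 ARRAY /dev/md/1 metadata=1.2 UUID=... name=proxmox:1
 ARRAY /dev/md/2 metadata=1.2 UUID=... name=proxmox:2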


Copy the boot partition to the corresponding array. (I have added the commands to unmount the partition here; thanks to user skazkin for the tip. His experience showed that in some cases, without these steps, the boot partition can turn out empty after a reboot.)

 # mkfs.ext3 /dev/md1
 mke2fs 1.42.5 (29-Jul-2012)
 Filesystem label=
 OS type: Linux
 Block size=1024 (log=0)
 Fragment size=1024 (log=0)
 Stride=0 blocks, Stripe width=0 blocks
 130560 inodes, 521920 blocks
 26096 blocks (5.00%) reserved for the super user
 First data block=1
 Maximum filesystem blocks=67633152
 64 block groups
 8192 blocks per group, 8192 fragments per group
 2040 inodes per group
 Superblock backups stored on blocks:
         8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

 Allocating group tables: done
 Writing inode tables: done
 Creating journal (8192 blocks): done
 Writing superblocks and filesystem accounting information: done

 # mkdir /mnt/md1
 # mount /dev/md1 /mnt/md1
 # cp -ax /boot/* /mnt/md1
 # umount /mnt/md1
 # rmdir /mnt/md1
Next, we need to comment out the line in /etc/fstab that mounts the boot partition by its UUID, and mount the corresponding array instead:

 # nano /etc/fstab 


 # <file system> <mount point> <type> <options> <dump> <pass>
 /dev/pve/root / ext3 errors=remount-ro 0 1
 /dev/pve/data /var/lib/vz ext3 defaults 0 1
 # UUID=d097457f-cac5-4c7f-9caa-5939785c6f36 /boot ext3 defaults 0 1
 /dev/pve/swap none swap sw 0 0
 proc /proc proc defaults 0 0
 /dev/md1 /boot ext3 defaults 0 1


It should be something like this.
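Before rebooting, it does not hurt to make sure the new entry actually mounts; this is a quick sanity check of my own, and you can skip it if something is keeping /boot busy:

 # umount /boot
 # mount /boot
 # df -h /boot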
Reboot:

 # reboot 


Configure GRUB (the same way as in the original article):
 # echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
 # echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
 # echo 'GRUB_TERMINAL=console' >> /etc/default/grub
 # echo raid1 >> /etc/modules
 # echo raid1 >> /etc/initramfs-tools/modules


Reinstall GRUB:

 # grub-install /dev/sda --recheck
 Installation finished. No error reported.
 # grub-install /dev/sdb --recheck
 Installation finished. No error reported.
 # update-grub
 Generating grub.cfg ...
 Found linux image: /boot/vmlinuz-2.6.32-27-pve
 Found initrd image: /boot/initrd.img-2.6.32-27-pve
 Found memtest86+ image: /memtest86+.bin
 Found memtest86+ multiboot image: /memtest86+_multiboot.bin
 done
 # update-initramfs -u
 update-initramfs: Generating /boot/initrd.img-2.6.32-27-pve


Now add the boot partition from the first (sda) disk to the array. First, mark it with the flag "raid":

 # parted -s /dev/sda set 2 "raid" on 


And then add:
 # mdadm --add /dev/md1 /dev/sda2
 mdadm: added /dev/sda2


If you now look at the state of the arrays:

 # cat /proc/mdstat
 Personalities : [raid1]
 md2 : active (auto-read-only) raid1 sdb3[1]
       487731008 blocks super 1.2 [2/1] [_U]

 md1 : active raid1 sda2[2] sdb2[1]
       521920 blocks super 1.2 [2/2] [UU]

 unused devices: <none>


then we will see that md1 has become a two-disk array: [UU].

Now we need to move the main partition, the LVM one. There are no differences from the original here, apart from the different partition numbering:

 # screen bash
 # pvcreate /dev/md2
   Writing physical volume data to disk "/dev/md2"
   Physical volume "/dev/md2" successfully created
 # vgextend pve /dev/md2
   Volume group "pve" successfully extended
 # pvmove /dev/sda3 /dev/md2
   /dev/sda3: Moved: 2.0%
   ...
   /dev/sda3: Moved: 100.0%
 # vgreduce pve /dev/sda3
   Removed "/dev/sda3" from volume group "pve"
 # pvremove /dev/sda3

Here, on skazkin's recommendation, I added the pvremove command. Without it (again, not always), another problem may appear: the system will not understand what has happened to the disks, will drop into the initramfs console, and will not boot any further.


Add the sda3 partition to the array:

 # parted -s /dev/sda set 3 "raid" on
 # mdadm --add /dev/md2 /dev/sda3
 mdadm: added /dev/sda3
 # cat /proc/mdstat
 Personalities : [raid1]
 md2 : active raid1 sda3[2] sdb3[1]
       487731008 blocks super 1.2 [2/1] [_U]
       [>....................]  recovery =  0.3% (1923072/487731008) finish=155.4min speed=52070K/sec

 md1 : active raid1 sda2[2] sdb2[1]
       521920 blocks super 1.2 [2/2] [UU]

 unused devices: <none>


and see that it has been added.
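The rebuild progress can be watched without re-running the command by hand, for example:

 # watch -n 10 cat /proc/mdstat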

Since I was following the original article, I went to pour myself some coffee.

Once the array has finished rebuilding (which is also not quick), this part can be considered complete.

For those who, like me, did not understand why nothing more needs to be done here, let me explain: since we actually moved the LVM volume from one block device to another, there is no need to register it anywhere (as we had to do with /boot). I was stuck at this point for a while.
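If you want to convince yourself of this, the standard LVM reporting commands will show that the pve volume group now lives entirely on /dev/md2 and that /dev/sda3 is no longer part of it:

 # pvs
 # vgs
 # lvs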

FreeNAS 9.2 on AMD processors


My next step was to install FreeNAS 9.2 on Proxmox. I struggled with it for a long time, until I tried installing from the same image (FreeNAS 9.2) on another Proxmox server. That one is slightly different from the machine described in this article: first, it runs on a Core i7; second, it runs Proxmox 3.1. And there it installed on the first attempt. So the problem was either AMD (there is definitely none of that on the other server) or Proxmox 3.2 having broken FreeBSD 9 support (brrr). I dug around for a long time, then started experimenting myself. In the end it was AMD after all. I do not know what exactly their problem is, but as soon as I set the processor type to Core 2 Duo in the VM properties, FreeNAS 9.2 installed without any problems.
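For reference, the processor type can be changed either in the web interface (roughly: select the VM, open the Hardware tab, edit Processors and pick the Type) or from the console. Assuming the FreeNAS VM has ID 100, as in the passthrough example below, something like this should do it (the type name follows the QEMU CPU model list, so double-check it against what the GUI offers):

 # qm set 100 -cpu core2duo

This simply adds a "cpu: core2duo" line to the VM's config file.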

Forwarding a physical disk to a KVM virtual machine (Proxmox)


I searched the web for an answer to this question for a long time, but found only fragments. Maybe someone else can immediately work out what to do and how from those, but I could not.
In short, it is done like this (from the console):

 # nano /etc/pve/nodes/proxmox/qemu-server/100.conf 


and add the line at the end:

 virtio0: /dev/sdc 


where sdc is your device. You can also specify other parameters after it, separated by commas (they are listed in the Proxmox wiki).
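As an illustration only (I have not measured these options myself, and the exact option names should be checked against the wiki for your Proxmox version), a line with an extra parameter could look like this:

 virtio0: /dev/sdc,cache=none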

That's all. True, I do not know how much such a connection raises (or lowers) disk I/O speed. The tests are still ahead of me.

Source: https://habr.com/ru/post/218757/

