Good day.
I would like to tell you my story of setting up Xen on an HP ProLiant DL160 Gen8. As it turned out, being too lazy to study the specification of the hardware you are buying and to check its compatibility with the planned software ahead of time plays a pretty bad joke on you. When ordering the hardware I looked at the server's characteristics and made sure a RAID controller was present; alas, the server turned out to be equipped with a Smart Array B120i SATA controller, which is essentially a software RAID that is neither natively supported nor seen by the Xen installer. This article is about how to do everything as competently as possible in a similar situation.
A small digression, which can be skipped to save time. For several years I have been using such a wonderful thing as XCP 1.6 at work: about 16 servers run on 3 physical hosts, plus about a dozen test machines for programmers and administrators. Headquarters listened to my pleading and deigned to buy several more pieces of hardware, among them two HP DL160 Gen8, which naturally would also become hosts for virtual machines. To my great regret XCP stopped at version 1.6, but XenServer 6.2 became completely free, so that is what we will install. We download the image, check the checksum, download the image again, check the checksum, give up on the whole thing, switch the download from FTP to the torrent and pull everything down cleanly that way. We write the image to a USB flash drive, configure the RAID on the server, set booting from the RAID controller and a one-time boot from the flash drive. The installation begins, and here come the first difficulties: the installer sees the disks individually, not as a single RAID array. We google, beat the tambourine, change RAID types, the result is the same. Google says the Citrix site supposedly has drivers for this RAID controller for CentOS; we download the CentOS 7 distribution and try to feed it the drivers, but the RAID is still not seen. We find an article on how to create software RAID from the console during a CentOS installation, adapt it to our case, and run into a new problem: CentOS 7 is cool, even too cool for our Xen. In short, further googling and a few days of practical shamanism gave birth to this article.
So we have the following source data:
HP ProLiant DL160 Gen 8
4 hard drives of 1Tb
16GB flash drive (I first tried an 8GB one, but as it turned out, that is not enough)
We need to create 3 RAID arrays: md0 and md1, 4GB each, as RAID 1; everything else will be occupied by md2 as RAID 10. Initially I tried to use RAID 10 for all arrays, but for some reason Xen refuses to boot from a RAID 10; the information I found on the Internet mentioned boot problems only with RAID 5. Hence exactly this array layout.
Initially the task did not seem very difficult, and following the instructions (links #4-8) everything went great until we got to the phase of booting from the second/third hard disk, where the server gave us a lot of emotions: it is impossible to pick a specific device to boot from (that is, if you have several flash drives with different systems, or several hard drives with different operating systems, it will boot ONLY from the first entry in the list). No other solution to this problem was found, so another way was chosen: install the system onto a flash drive, boot from that flash drive, and then perform all the actions needed to configure the RAID arrays and transfer the host system onto them. As I already said, the BIOS gives us no choice of which flash drive to boot from, and 24/7 access to the server is not always possible. iLO 4 comes to the rescue - no, it does not help us choose the boot device (at most it lets us set the boot order, assign a specific device type for the next reboot, and perform that reboot), but it does make it possible to mount any image as a virtual CD/DVD-ROM and boot from it.
So, the sequence of actions is as follows: mount the image via iLO, install the system onto the flash drive, boot from the flash drive, configure the RAID arrays and transfer the system onto them, apply the finishing touches - profit.
Stage 1
During the xen boot, press F2 and in the prompt that appears, write the command:
shell
The required set of libraries and drivers will be loaded and the prompt will appear.
bash-3.2#
Many guides recommend using the command to determine the device we need:
cat /proc/partitions
Its output is not very convenient in this case, the eyes scatter over all the numbers, so I will use this instead:
fdisk -l | grep /dev/sd
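If the fdisk output leaves any doubt about which device is the flash drive, one extra hedged check (assuming /dev/disk/by-id is populated in this environment) is to look for device links whose names start with usb-:
ls -l /dev/disk/by-id/ | grep -i usb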
From the fdisk output you can see that our flash drive is /dev/sde. Let's do some preparatory work.
Using
fdisk /dev/sd*
we delete all partitions on all disks (as a result of repeated experiments many different configurations were created; they are of no use to us now, so we delete them boldly, and remember to remove the superblocks of the RAID arrays created earlier).
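For reference, roughly the same cleanup can be done non-interactively. A minimal sketch, assuming the four data disks are /dev/sda-/dev/sdd and the flash drive is /dev/sde as in my case, and assuming mdadm and sgdisk are available in this shell (double-check the device names with fdisk -l before running anything destructive):
# wipe old md superblocks, ignoring partitions that never held one
for p in /dev/sd[a-d][1-9]; do mdadm --zero-superblock "$p" 2>/dev/null; done
# destroy the old partition tables themselves
for d in /dev/sd[a-d]; do sgdisk --zap-all "$d"; done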
To start the installation process, use the following command:
/opt/xensource/installer/init
After the installation completes we will be prompted to remove all media and reboot; agree, and we end up back in the shell we started earlier.
We perform the following actions in it:
mkdir /tmp/sda
mount -t ext3 /dev/sde1 /tmp/sda
chmod -R 664 /sys/block
cp -R /sys/block /tmp/sda/sys/
# The errors that appear here do not concern us, everything is going as intended
chroot /tmp/sda
cd /boot
ls -a
# The ls command will show us the contents of the directory
mv initrd-***xen.img initrd-***xen.img.old
mkinitrd --with-usb initrd-***xen.img ***xen
# Instead of *** insert the values that the previous ls showed in the directory (tab autocompletion works):
It should turn out like this:
mv initrd-2.6.32.43-0.4.1.xs1.8.0.835.170778xen.img initrd-2.6.32.43-0.4.1.xs1.8.0.835.170778xen.img.old
mkinitrd --with-usb initrd-2.6.32.43-0.4.1.xs1.8.0.835.170778xen.img 2.6.32.43-0.4.1.xs1.8.0.835.170778xen
Exit the chroot:
exit
Sync to disk:
sync
And reboot!
reboot
Stage 2
Here we have two options: either use iLO 4 to set, in the Virtual Media / Boot Order section, One-Time Boot Status - USB Storage Device, or press F11 while the server boots and select item 3 in the menu that appears. Both actions are essentially equivalent; just use the iLO approach if you are rebooting the server remotely (i.e. first go into the iLO settings and set a one-time boot from USB, then type reboot in the console and press Enter).
We will perform further actions in the console that XenCenter provides us (as it turned out, the ProLiant DL160 Gen8 is quite capricious: apart from lacking a proper hardware RAID, it behaves very badly together with a KVM switch when it comes to transferring the picture, everything twitches and distorts and the eyes start to hurt very quickly). So we connect the server to our XenCenter: enter the server's IP and the root password in the appropriate fields. Another advantage of working through XenCenter is the ability to copy and paste from the clipboard, which will significantly speed up the further RAID setup.
Go to the Console tab and press Enter.
So first, let's see what the device order looks like now:
fdisk -l | grep /dev/sd
As you can see, the flash drive has now become /dev/sda. The command
cat /proc/partitions
shows that the flash drive has two partitions, and the command
sgdisk -p /dev/sda
gives more readable (for me) information about these partitions.
Erase the partition table on all hard drives:
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --zap-all /dev/sdd
sgdisk --zap-all /dev/sde
Install the GPT partition table there:
sgdisk --mbrtogpt --clear /dev/sdb
sgdisk --mbrtogpt --clear /dev/sdc
sgdisk --mbrtogpt --clear /dev/sdd
sgdisk --mbrtogpt --clear /dev/sde
Create partitions identical to those on the first disk (pay attention to the numbers: set them identical to the corresponding partitions on the original disk with the installed system):
sgdisk --new=1:2048:8388641 /dev/sdb
sgdisk --new=2:8390656:16777249 /dev/sdb
sgdisk --new=3:16779264:$(expr $(sgdisk -p /dev/sdb | awk '/Disk \// {print($3)}') - 34) /dev/sdb
Here I want to draw your attention to one thing: since we install only the system on the flash drive and do not configure local storage there, the size of the third partition must be chosen according to the size of your disks. Some instructions mention that you should give not the entire disk but a little less (as I understand it, a backup copy of the GPT is stored at the end of the disk, so leave space for it).
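As a hedged sanity check you can read the exact boundaries of the two system partitions straight off the flash drive (assumed here to be /dev/sda, as shown above) and compare them with the numbers used for /dev/sdb:
sgdisk -i 1 /dev/sda
sgdisk -i 2 /dev/sda
# the third partition simply ends 34 sectors before the end of the disk, which is
# exactly what the expr ... - 34 construction above computes, leaving room for the backup GPT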
Change the partition type codes to fd00, the identifier for partitions containing RAID members:
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --typecode=3:fd00 /dev/sdb
Copy the created partitions to the other disks (note the argument order: the target disk comes first, the source second):
sgdisk -R /dev/sdc /dev/sdb
sgdisk -R /dev/sde /dev/sdb
sgdisk -R /dev/sdd /dev/sdb
One of the guides had comments that problems may arise if all disks end up with the same UUIDs, so let's get rid of a potential future headache by randomizing them:
sgdisk -G /dev/sdb
sgdisk -G /dev/sdc
sgdisk -G /dev/sdd
sgdisk -G /dev/sde
Set the legacy BIOS bootable attribute on the first partition of each disk:
sgdisk /dev/sdb --attributes=1:set:2
sgdisk /dev/sdc --attributes=1:set:2
sgdisk /dev/sdd --attributes=1:set:2
sgdisk /dev/sde --attributes=1:set:2
Since these disks were previously used for experiments, and to avoid problems in the future, run the following commands to make sure no superblocks are left over from previous arrays:
mdadm --examine /dev/sdb
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdb2
mdadm --examine /dev/sdc
mdadm --examine /dev/sdc1
mdadm --examine /dev/sdc2
mdadm --examine /dev/sdd
mdadm --examine /dev/sdd1
mdadm --examine /dev/sdd2
mdadm --examine /dev/sde
mdadm --examine /dev/sde1
mdadm --examine /dev/sde2
And if remnants of an old array are found on at least one of them, you must perform the following operations:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdc2
mdadm --zero-superblock /dev/sdd1
mdadm --zero-superblock /dev/sdd2
mdadm --zero-superblock /dev/sde1
mdadm --zero-superblock /dev/sde2
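The same checks and cleanup can be written as a loop; a small sketch assuming the member devices are exactly the ones listed above:
for p in /dev/sd[b-e] /dev/sd[b-e][1-3]; do
  mdadm --examine "$p" >/dev/null 2>&1 && echo "old md superblock found on $p"
done
for p in /dev/sd[b-e][1-3]; do mdadm --zero-superblock "$p" 2>/dev/null; done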
Create raid arrays.
mdadm --stop /dev/md0
mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
To mdadm's question "Continue creating array?" confidently answer yes. Now we create the second array:
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
Now we create the third array:
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
Then we wait for mdadm to assemble our partitions into full-fledged arrays. To monitor the progress, either run this command from time to time:
cat /proc/mdstat
or use this variation, which refreshes the status information and displays it on screen in real time:
watch -n 1 cat /proc/mdstat
While creating md0 and md1 takes a couple of minutes, building md2 took about 3 hours... so you can safely go and have tea with buns.
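If you would rather script the wait than stare at the progress, a simple hedged sketch:
# block until /proc/mdstat no longer reports a resync/recovery in progress
while grep -Eq 'resync|recovery' /proc/mdstat; do sleep 60; done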
Create and mount the file system:
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
Copy the root file system there:
cp -vxpR / /mnt
We make changes to /mnt/etc/fstab: the root filesystem entry can be replaced with /dev/md0 manually using nano, or like this:
sed -i 's/LABEL=[a-zA-Z\-]*\s\(.*\)/\/dev\/md0 \1/' /mnt/etc/fstab
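Either way, it is worth checking the result before moving on:
grep md0 /mnt/etc/fstab
# the root entry should now begin with /dev/md0 instead of LABEL=...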
Copy the bootloader to all disks.
mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --raid --install /boot
exit
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdc
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sde
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdd
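Since the four dd invocations differ only in the target disk, the same step can be written as a loop (the gptmbr.bin path is the one used above):
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  dd if=/mnt/usr/share/syslinux/gptmbr.bin of="$d"
done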
Create a new boot image and unpack it:
mkdir /mnt/root/initrd-raid
mkinitrd -v --fstab=/mnt/etc/fstab /mnt/root/initrd-raid/initrd-`uname -r`-raid.img `uname -r`
cd /mnt/root/initrd-raid
zcat initrd-`uname -r`-raid.img | cpio -i
Edit the 'init' file, inserting raidautorun lines for all three arrays:
sed -i 's/raidautorun \/dev\/md0/raidautorun \/dev\/md0\nraidautorun \/dev\/md1\nraidautorun \/dev\/md2/' init
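After this edit the init script should contain three consecutive lines, one per array:
raidautorun /dev/md0
raidautorun /dev/md1
raidautorun /dev/md2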
Copy the new boot image to the /mnt/boot directory and update the boot menu:
find . -print | cpio -o -Hnewc | gzip -c > /mnt/boot/initrd-`uname -r`-raid.img
rm /mnt/boot/initrd-2.6-xen.img
cd /mnt/boot
ln -s initrd-`uname -r`-raid.img initrd-2.6-xen.img
In /mnt/boot/extlinux.conf replace the string "root=LABEL=root-..." with "root=/dev/md0" in all menu entries:
sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' extlinux.conf
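For illustration only (the label value here is hypothetical), a menu entry changes roughly like this:
# before: append /boot/xen.gz ... root=LABEL=root-xyzzy ...
# after:  append /boot/xen.gz ... root=/dev/md0 ...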
When all three arrays are synchronized, copy the RAID configuration into /etc/mdadm.conf:
mdadm --detail --scan >> /etc/mdadm.conf
Finishing touches
So our XenServer is installed and configured; it remains to pull out the flash drive, set booting from the hard disks, and reboot the machine. Then one single command creates our local storage on md2:
xe sr-create content-type=user type=lvm device-config:device=/dev/md2 shared=false name-label="Local storage"
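To confirm that the storage repository was actually created, a quick check with the standard xe CLI:
xe sr-list name-label="Local storage"
# should list exactly one SR of type lvm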
And to be able to sleep a little more calmly, we add notifications in case something happens to our RAID arrays.
To begin with, we add our e-mail address (where the alerts will be sent) to the /etc/mdadm.conf file:
sed -i '1i MAILADDR <e-mail>' /etc/mdadm.conf
Now let's enable the service that monitors the state of our arrays; as it turned out, everything is very simple:
service mdmonitor start
chkconfig mdmonitor on
To check that everything is set up and working, send yourself a test status:
mdadm --monitor --test /dev/md0
Actually, that's all. In our mailbox there is a message saying that everything is fine with our arrays, my admins are already putting the new virtual machines through their paces, and I look forward to your comments on this article.
References. It would have been much more difficult without them.