
Today, dear Habr readers and guests, I will share a story about how I built my own amusement park with blackjack and girls. That is, how I moved my dedicated server from Leaseweb onto software raid1 / 10 and lvm.
Why this article? It so happened that Leaseweb is probably one of the least accommodating hosters: according to the information I have, it is impossible to get the system installed with raid already configured, and technical support does not do this either. KVM is not available for all server series, and it costs serious money. So, out of a desire to learn Linux more deeply, I started figuring this topic out myself.
I found many articles on the Internet about how to do this, but they had two main problems: they either require BIOS access to change the boot order (which we do not have) or use GRUB Legacy (pre-version 1), which is not used in Debian 7.
The result of two weeks of living in Google, poring over manuals and Wikipedia, and a variety of experiments is this instruction "for dummies."
This article is by no means an advertisement for Leaseweb hosting. Why did I not take a server from resellers with Russian support? Or why did I not take hosting from another company, where raid can be set up during installation or KVM is available? It just turned out that way historically, and something had to be done about it.
Since at the time of solving this problem my own knowledge of Linux was extremely modest, in this article I will try to describe in detail the process of moving a working system to raid 1/10 + lvm. The manual uses MBR disk partitioning; it has not been tested with GPT.
So, the starting conditions: there is a dedicated server from Leaseweb with 4 HDDs and Debian 7.0 64-bit installed. The "Custom" partition layout is:
Partition 1 - boot (100 megabytes)
Partition 2 - swap - 4 gigabytes
Partition 3 - the main partition - the remaining space; we will include it in lvm.
What we need to do:
Make the boot partition a raid1 array across 4 disks.
Put two swap partitions of 4 gigabytes each into two raid1 arrays (if swap is not placed on raid1, a failed disk can bring the system down: the system, running from the other disk in the array, may try to write swap data to the dead disk, which can cause an error). Swap can be made smaller or kept off the raid array, but then I recommend reading up on what swap is and how it is used; there is no single clear answer here.
Organize the remaining space into a raid10 array of 4 partitions on different disks and use it for lvm, so that partitions can be resized later.
ATTENTION! Performing any of the operations listed below may result in data loss! Before performing them, make sure that you have backup copies of any data you need and that those copies are intact.
0. Preparation
THIS STEP IS PERFORMED ONLY WHEN INSTALLING ON A NEW SERVER! If you are converting a working server, go to step 1.
I got the server with already used disks (no more than 1000 hours each). Given that sda (the first disk) came with an installed system and an mbr partition table, while the second (sdb), third (sdc) and fourth (sdd) disks were gpt, I decided to wipe all the information from the disks completely.
To do this, we need to boot into recovery mode (I used Rescue amd64). Through the SSC we start recovery mode and connect to the server through an SSH client (I use PuTTY).
Next, we zero out the disk surfaces with the commands below (the operation takes time; on 500-gigabyte disks it is about one hour per disk).
For those who argue that it is enough to erase the partition table (the first 512 bytes of the disk): I personally ran into a situation where, after previous experiments, I created a new partition table identical to the one used in the previous experiment and got the entire contents of the disk back. Therefore, I zeroed the disks completely:
dd if=/dev/zero of=/dev/sda bs=4k
dd if=/dev/zero of=/dev/sdb bs=4k
dd if=/dev/zero of=/dev/sdc bs=4k
dd if=/dev/zero of=/dev/sdd bs=4k
Each command finishes with output of this type:
dd: writing `/dev/sdd': No space left on device
122096647+0 records in
122096646+0 records out
500107862016 bytes (500 GB) copied, 4648.04 s, 108 MB/s
For those who are not ready to wait a long time, this process can be launched in parallel by running 4 copies of the ssh client.
A useful addition from Meklon: you can use the command
nohup dd if=/dev/zero of=/dev/sda bs=4k &
Simply put, nohup detaches the command's output from the screen and writes whatever the command prints to the file /home/username/nohup.out, or /root/nohup.out if we are running as root; it also keeps the command from being killed if the connection drops. The ampersand (&) at the end launches the command in the background, which lets you keep working with the system without waiting for the command to finish.
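If you want to wipe all four disks from a single session, a minimal sketch of the same commands run in the background (same dd parameters as above) could look like this:
nohup dd if=/dev/zero of=/dev/sda bs=4k &
nohup dd if=/dev/zero of=/dev/sdb bs=4k &
nohup dd if=/dev/zero of=/dev/sdc bs=4k &
nohup dd if=/dev/zero of=/dev/sdd bs=4k &
jobs    # list the background jobs to make sure all four dd processes are running
You can also ask a running dd to print its current progress with kill -USR1 <pid>, substituting the process id of the dd in question.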
Now we need to create a clean mbr on the disks. To do this, simply open the disk partitioning program and exit it, saving the result:
fdisk /dev/sda
then press w to write the new empty partition table and exit.
Repeat the operation for the sdb, sdc and sdd drives.
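If you do not want to repeat this by hand, the same operation can be sketched as a loop, assuming fdisk accepts its single w command from a pipe (treat this shortcut as an assumption and fall back to the interactive way if in doubt):
for d in sdb sdc sdd; do
  echo w | fdisk /dev/$d    # write a fresh empty MBR and exit
done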
Reboot the server.
reboot (or shutdown -r now)
1. Install the system.
Now we do a clean installation of the operating system. Go to the SSC, in the server management section click "Reinstall", and choose the Debian 7.0 (x86_64) operating system. If you do not plan to use more than 3 gigabytes of RAM, you can install the x86 version. Next, select the "Custom partition" partitioning option.

The tmp partition is removed altogether; if needed, we will create it separately later, already on an lvm volume.
Press install, wait for the installation to finish and log in to the system.
2. Install the necessary modules
Install the raid modules.
apt-get install mdadm
Install lvm.
apt-get install lvm2
3. Copying the partition table to the second, third and fourth disks
At the moment we already have an automatically created partition structure on the first disk (sda). You can view it with the command
fdisk -l /dev/sda
Its output is as follows:
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00095cbf
Device Boot Start End Blocks Id System
/dev/sda1 2048 4095 1024 83 Linux
/dev/sda2 * 4096 616447 306176 83 Linux
/dev/sda3 618494 976771071 488076289 5 Extended
/dev/sda5 618496 9003007 4192256 82 Linux swap / Solaris
/dev/sda6 9005056 976771071 483883008 83 Linux
From this partition structure we need:
- the partition marked with an asterisk (sda2) - this will be boot
- partition sda5, type 82 - Linux swap - this is, accordingly, swap
- partition sda6 - this is the main partition.
Since we make mirror arrays, we need the identical partition structure on the second, third and fourth disks.
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Repeat the procedure, replacing sdb with sdc and sdd.
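The same copying can be wrapped in a small loop for the remaining disks (a sketch that simply repeats the command above for sdc and sdd):
for d in sdc sdd; do
  sfdisk -d /dev/sda | sfdisk --force /dev/$d
done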
For the Grub version used in Debian 7.0 there is no need to pass extra options like --metadata=0.9 when creating a raid array; everything works fine with superblock 1.2, and there is also no need to change the partition type to fd (raid autodetect).
4. Creation of raid arrays
Create (the -C key) an array named md0 of type raid1 across 4 partitions, with one of them missing - this will be the boot array (partition 2 on each disk). We will add the missing partition later.
mdadm -C /dev/md0 --level=1 --raid-devices=4 missing /dev/sdb2 /dev/sdc2 /dev/sdd2
The first array for swap (remember, I am going to use two of them)
mdadm -C /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb5
Second array for swap
mdadm -C /dev/md2 --level=1 --raid-devices=2 /dev/sdc5 /dev/sdd5
Now we create the main raid array of type raid10, on which we will later install lvm
mdadm -C /dev/md3 --level=10 --raid-devices=4 missing /dev/sdb6 /dev/sdc6 /dev/sdd6
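At this point it is worth checking that all four arrays were actually assembled, for example:
cat /proc/mdstat    # overall raid status
mdadm --detail /dev/md0    # detailed information about a particular array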
5. Creating an lvm partition.
First, mark the resulting main array md3 as an LVM Physical Volume
pvcreate /dev/md3
Now we create a volume group vg0 (any name can be used) covering the entire md3 array.
vgcreate vg0 /dev/md3
Well, now create the root partition we need (/).
If you are going to create several volumes, do not use up all the space at once: it is much easier to add space to the volume that needs it later than to suffer and carve it back out of existing ones.
lvcreate -L50G -n root vg0
-L50G - this key sets the volume size of 50 gigabytes; you can also use the letters K and M for kilobytes and megabytes respectively
-n root - this key sets the name of the created volume, in this case root, i.e. we can access it as /dev/vg0/root
The volume is created in the volume group vg0 (if you used a different name in the previous command, substitute it for vg0 here).
If you are creating separate volumes for /tmp, /var, /home and so on, create them in the same way.
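For illustration only, a separate volume for /home (the name home and the 20G size are example values, not something this setup requires) and a quick look at what has been created might be:
lvcreate -L20G -n home vg0
pvs    # physical volumes
vgs    # volume groups
lvs    # logical volumes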
6. Creating file systems on partitions
On the boot section (md0 array) we will use the ext2 file system.
mkfs.ext2 /dev/md0
Create a swap
mkswap /dev/md1
mkswap /dev/md2
and turn them on with equal usage priorities (the -p key sets the priority; if the priorities differ, one swap will be used while the second one sits idle until the first overflows, which is inefficient)
swapon -p1 /dev/md1
swapon -p1 /dev/md2
On the other partitions I use ext4. It lets you grow a partition on the fly, without stopping the server; shrinking a partition is only possible with the partition offline (unmounted).
mkfs.ext4 /dev/vg0/root
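As a sketch of the on-the-fly growth mentioned above (the +10G value is just an example), enlarging the root volume later would look roughly like this:
lvextend -L+10G /dev/vg0/root
resize2fs /dev/vg0/root    # grows the mounted ext4 filesystem to fill the enlarged volume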
7. Updating in the system information about the created raid-arrays
Let's save the original array configuration file; we will need it later.
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
Now append the currently relevant information to it.
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
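It does not hurt to glance at the result and make sure an ARRAY line appeared for each of the four arrays:
grep ARRAY /etc/mdadm/mdadm.conf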
8. Configuring automatic mounting (connection) of disks at system startup
In Linux a disk or partition can be referred to in two ways: by its device name, such as /dev/sda6, or by UUID. I chose the second method; in theory it helps avoid a number of problems when device names change, and it is common practice nowadays.
Get the UUIDs of the "partitions" we need - md0 (boot), md1 (swap), md2 (swap), vg0/root (root) - with the command
blkid /dev/md0
We get output like this:
/dev/md0: UUID="420cb376-70f1-4bf6-be45-ef1e4b3e1646" TYPE="ext2"
In this case, we are interested in UUID=420cb376-70f1-4bf6-be45-ef1e4b3e1646 (without quotes) and the file system type, ext2.
We run this command for /dev/md1, /dev/md2 and /dev/vg0/root and save the obtained values (you can highlight the text in PuTTY, which copies it, and paste it with a single right click; special masochists can copy it out by hand :)
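As a small shortcut, blkid accepts several devices at once, so all the UUIDs we need can be collected in one call:
blkid /dev/md0 /dev/md1 /dev/md2 /dev/vg0/root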
Next, open the file fstab for editing
nano /etc/fstab
and edit the file to the following form, substituting the necessary UUIDs:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda6 during installation
UUID=fe931aaf-2b9f-4fd7-b23b-27f3ebb66719 / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=420cb376-70f1-4bf6-be45-ef1e4b3e1646 /boot ext2 defaults 0 2
# swap was on /dev/sda5 during installation
UUID=80000936-d0b7-45ad-a648-88dad7f85361 none swap sw 0 0
UUID=3504ad07-9987-40bb-8098-4d267dc872d6 none swap sw 0 0
If you are adding other partitions, the line format is as follows:
UUID=<uuid> <mount point> <file system type> <options> 0 2
If you want to learn more about the mount options and the values ​​of dump and pass, then in the appendix at the end of the article you will find a link.
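For example, a line for a hypothetical /home volume on lvm (the UUID here is a placeholder; take the real one from blkid) might look like:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home ext4 defaults 0 2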
To save the file, press "ctrl + x" - "y" - "enter"
9. Mounting partitions
Create the directories in which we will mount the root partition and the boot partition
mkdir /mnt/boot
mkdir /mnt/root
mount /dev/md0 /mnt/boot
mount /dev/vg0/root /mnt/root
10. Update Boot Loader and Boot Image
Update the Grub2 loader. During this update, the bootloader collects information about the newly mounted partitions and updates its configuration files.
update-grub
If everything goes well, we get output like this:
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-4-amd64
Found initrd image: /boot/initrd.img-3.2.0-4-amd64
Found Debian GNU/Linux (7.4) on /dev/mapper/vg0-root
done
And update the boot image (initramfs) to reflect the changes
update-initramfs -u
11. Copy the contents of the first disk to raid
Copy the contents of the root partition of the installed system (/) to the root partition located on lvm
cp -dpRx / /mnt/root
Next, go to the boot directory and copy its contents to the separately mounted boot partition located on the /dev/md0 raid array
cd /boot
cp -dpRx . /mnt/boot
12. Install the updated bootloader on all disks of the array
Next we install the updated bootloader on the first sda disk.
grub-install /dev/sda
We execute the same command for sdb, sdc, sdd drives
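Or, as a one-liner that does exactly the same for all four disks:
for d in sda sdb sdc sdd; do grub-install /dev/$d; done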
Then restart the server
reboot
We wait about 10 minutes and through the Self Service Center launch the Rescue (x64) recovery mode, or, if you installed the 32-bit version, the corresponding version of Rescue.
If it does not start, restart the server via the SSC and try again. For me it did not start on the first attempt.
13. Mount disks to update server configuration
With these commands we mount the root and boot partitions from the raid arrays, plus the /dev, /sys and /proc service filesystems
mount /dev/vg0/root /mnt
mount /dev/md0 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
14. Changing the shell and changing the environment of the root user
In Linux we can tell the system that the root user should now work inside another installed (but not currently running) system.
To do this, we need to run the chroot command. But the recovery mode starts with the zsh shell by default, and I could not get chroot to work from it, at least I did not find a way. So we first change the shell to be used and then run chroot.
SHELL=/bin/bash
chroot /mnt
15. Adding the first sda disk to the created raid arrays
Add the boot partition to the corresponding array
mdadm --add /dev/md0 /dev/sda2
Add the swap partition to the first swap array
mdadm --add /dev/md1 /dev/sda5
Add the main partition to the main array
mdadm --add /dev/md3 /dev/sda6
After executing these commands, synchronization and recovery of the arrays starts; this process takes a long time. Adding a 500-gigabyte disk took me about 1.5 hours.
You can watch the progress with the command
watch cat /proc/mdstat
After the synchronization of arrays is complete, we get the following output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid10 sda6[4] sdb6[1] sdd6[3] sdc6[2]
      967502848 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md2 : active raid1 sdc5[0] sdd5[1]
      4190144 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda2[4] sdb2[1] sdd2[3] sdc2[2]
      305856 blocks super 1.2 [4/4] [UUUU]
md1 : active raid1 sda5[2] sdb5[1]
      4190144 blocks super 1.2 [2/2] [UU]
unused devices: <none>
You can return to the command line with Ctrl+C.
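Once the resynchronization is finished, you can also look at the detailed state of any array, for example:
mdadm --detail /dev/md3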
16. Updating in the system information about the created raid-arrays
We have already performed a similar operation. Now we restore the original configuration file and append the up-to-date information about the raid arrays to it.
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
17. Update Boot Loader and Boot Image
Again, update the bootloader after the changes are made
update-grub
and system boot image
update-initramfs -u
18. Installing the updated bootloader and completing the update system settings
Install the bootloader on disks
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd
Then exit the current environment.
exit
and restart the server
reboot
HOORAY! After a reboot, we get a working server on raid and lvm.
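After logging back in, it is worth making sure everything came up as expected, for example:
cat /proc/mdstat    # all arrays are active
swapon -s           # both swap arrays are in use with equal priority
df -h               # / and /boot are mounted from lvm and md0
lsblk               # the overall disk / raid / lvm layout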
Appendix:
Learn more about the mdadm raid array manager commands.
Learn more about the lvm logical volume manager commands.
Read more about the fstab file and mount options.
P.S.: If in places I have reinvented the wheel or the terminology is not quite right, please make allowances for my modest knowledge of Linux and point out the right way.
I proofread the article for errors, though with a text of this size I could have missed something.