
Installing Debian/Ubuntu 10.04 on a RAID 5 array

This article is posted at the request of someone who does not yet have a Habr account but dreams of joining its ranks.
If anyone has an invite and the desire to share it, please write to: cibertox (at) bk (dot) ru

While expanding a web server, I had to think about how not to lose the whole project if a hard drive died; as Murphy's law has it, that usually happens unexpectedly.
The answer suggests itself: use RAID. And since I also wanted speed, that means RAID 5 (the same method also works for level 6 and level 10 arrays).
The main problem with Debian/Ubuntu is that the installer cannot put the system, as is, onto such an array: data is striped across all disks of the array together with parity, and reassembling it requires calculations the boot loader cannot perform on its own.
There are recipes that put the system boot loader on a USB flash drive, but that is a dead end, and I do not think it is right to use it on a full-fledged production system. We have perfectly good hard drives, and running to the data center because of an unexpectedly dead flash drive with the boot loader on board is insanity!
So, we have a ready server with four hard drives, fully assembled.

Welcome under the cut.


Part one: System installation.


The root file system (/) will be on RAID 5.
Start the installation and, in the disk partitioning step, create 3 partitions on each physical drive: the first 2 GB in size; the second for swap (how much to allocate depends on your server's workload; I gave 512 MB each); the third taking all the remaining space.
Create the first and third partitions with the type "physical volume for RAID".
After creating the partitions on all disks, go to the software RAID configuration section.
After all these manipulations we have 12 partitions, 3 on each disk:

/dev/sda1
/dev/sda5
/dev/sda6

/dev/sdb1
/dev/sdb5
/dev/sdb6

/dev/sdc1
/dev/sdc5
/dev/sdc6

/dev/sdd1
/dev/sdd5
/dev/sdd6


Where:
partitions numbered 1 are 2 GB each;
partitions numbered 5 are reserved for swap;
partitions numbered 6 take all the remaining disk space.
(This layout is not an axiom; cut the partitions however your tasks require.)
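For unattended installs, the same layout can be captured as an sfdisk dump and replayed onto each disk. The fragment below is a sketch, not from the original article: the start/size values are illustrative for a 10 GB disk like the ones in the fdisk listing further down, and sdX is a placeholder device name.

```
# layout.sfdisk: hypothetical dump in sfdisk's old "Id=" format.
# Apply (destructively!) with: sfdisk --force /dev/sdX < layout.sfdisk
/dev/sdX1 : start=       63, size=  3903488, Id=fd, bootable
/dev/sdX2 : start=  3903551, size= 17061890, Id= 5
/dev/sdX5 : start=  3903614, size=  1001472, Id=82
/dev/sdX6 : start=  4905149, size= 16058368, Id=fd
```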

We proceed to the creation of the md0 array:
Choose RAID 1 as the partition type; when asked for the number of active devices, replace the default 2 with 4, and set the number of spares to 0. Select all the partitions numbered 1: sda1, sdb1, sdc1, sdd1.
As a result we get one RAID 1 device, 2 GB in size (this is important!).
Leave partitions sda5, sdb5, sdc5, sdd5 untouched; they are for swap.
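A side note not from the original article: if the four swap partitions are given equal priority in /etc/fstab, the kernel stripes swap across them, which is RAID 0-like in speed but has no redundancy (if a disk dies while its swap is in use, processes with pages there can crash). A sketch, assuming the partition names above:

```
# /etc/fstab fragment (sketch): equal pri= values make the kernel
# round-robin swap pages across all four partitions
/dev/sda5  none  swap  sw,pri=1  0  0
/dev/sdb5  none  swap  sw,pri=1  0  0
/dev/sdc5  none  swap  sw,pri=1  0  0
/dev/sdd5  none  swap  sw,pri=1  0  0
```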

Next, we proceed to the creation of the md1 array, where our RAID 5 itself will live.
Replace the suggested 3 active devices with 4, and set the number of spares to 0.
Add to it all the partitions numbered 6: sda6, sdb6, sdc6, sdd6.

If you did everything correctly, you should now have 2 RAID devices:
RAID 1 device #0, 2 GB
RAID 5 device #1, 24.7 GB
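A quick sanity check on the 24.7 GB figure (my own aside, not in the original article): RAID 5 usable capacity is (N - 1) times the member size, since one member's worth of space across the array holds parity. The member size below is taken from the fdisk listing later in the article (8029184 one-KiB blocks per RAID partition):

```shell
# RAID 5 usable space = (disks - 1) * member size; one member's worth
# of capacity is consumed by parity.
disks=4
member_kib=8029184          # size of each sdX6 partition, in KiB
usable_kib=$(( (disks - 1) * member_kib ))
echo "$usable_kib KiB usable"
```

This prints 24087552 KiB, i.e. roughly 24.7 GB; the actual array (24087360 blocks in /proc/mdstat) is slightly smaller because the md superblock also lives on each member.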

For each device, select a file system type and a mount point.
For the RAID 1 device, select the /boot mount point.
For the RAID 5 device, set the mount point to /.

Save the changes to disk and start the installation.
Wait for it to finish and agree to the offer to install the system boot loader to disk: on 10.04 it is installed to all 4 disks (on 8.04 and 8.10 it is automatically placed only on the disk that is first in the BIOS boot order; this is an important nuance!).

Finishing off the boot loader!
Since all our disks are marked as bootable, there would be no particular difficulty in keeping GRUB 2, but it is better to replace it with legacy GRUB: first, it is well known; second, it is more stable; third, its capabilities are more than enough for our needs.
Update the package lists:
sudo apt-get update

Remove GRUB 2:
sudo apt-get purge grub2 grub-pc

Install the previous version:
sudo apt-get install grub

Generate the boot menu:
sudo update-grub
When asked whether to create the file menu.lst, answer Y.

Remove the remnants of GRUB 2:
sudo apt-get autoremove

Install GRUB on the remaining hard drives:

sudo su
grub-install --no-floppy /dev/sdb
grub-install --no-floppy /dev/sdc
grub-install --no-floppy /dev/sdd


Set up boot records on the remaining disks from the GRUB shell:

grub
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
device (hd2) /dev/sdc
root (hd2,0)
setup (hd2)
device (hd3) /dev/sdd
root (hd3,0)
setup (hd3)

quit

(There is no need to do this for hd0; GRUB is installed there automatically during installation.)

What do we get as a result?
We have a RAID 1 device for /boot, so a copy of the boot partition exists on all four disks, stored in plain form just as on an ordinary disk; any of the four disks is therefore bootable. The root partition on the RAID 5 is then assembled and mounted independently, and the OS starts quite calmly.

Part Two: Replacing a failed hard disk.



In the event of the unforeseen death of one of the four disks in the array, we are left with 3 copies of the /boot partition and a healthy / partition that has gone into degraded mode.
Before putting the system into production, it is worth a practice run: pull out one disk and see what happened by entering the command:

cat /proc/mdstat

It will print a table like this:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdb6[1] sda6[0] sdc6[3]
24087360 blocks level 5, 64k chunk, algorithm 2 [4/3] [UU_U]

md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
1951680 blocks [4/3] [UU_U]


Here md1 lists the partitions that remain operational (the example shows that sdd6 has left us; it was the third member in [UU_U], with numbering starting from 0).
The same goes for the md0 array: only sdd1 is missing.
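For scripting, a degraded array can be detected automatically rather than by eyeballing the map. A small sketch (the sample file inlines the mdstat output from above so the example is self-contained; on a real system read /proc/mdstat directly):

```shell
# Spotting a degraded array in a script: an underscore in the status map
# ([UU_U]) marks a failed or missing member.
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid5 sdb6[1] sda6[0] sdc6[3]
      24087360 blocks level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
      1951680 blocks [4/3] [UU_U]
EOF
# -B1 pulls in the "mdX : active" line preceding each degraded map
grep -B1 '\[U*_[U_]*\]' /tmp/mdstat.sample | grep ' : active'
```

Here it prints the md1 and md0 header lines, since both show [4/3]; on a healthy system ([UUUU] everywhere) it prints nothing.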
Replace the hard drive. If this is a full-fledged server with hot-swap support, the new disk should be picked up by itself; if for some reason that did not happen, there is nothing terrible about it: calmly reboot from any of the 3 remaining disks and enter the command.
First, switch to superuser mode:
sudo su
fdisk -l

we get a picture like this:
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004b95a

Device Boot Start End Blocks Id System
/dev/sda1 1 244 1951744 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2 244 1306 8530945 5 Extended
/dev/sda5 244 306 500736 82 Linux swap / Solaris
/dev/sda6 306 1306 8029184 fd Linux raid autodetect

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008e1c9

Device Boot Start End Blocks Id System
/dev/sdb1 1 244 1951744 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 244 1306 8530945 5 Extended
/dev/sdb5 244 306 500736 82 Linux swap / Solaris
/dev/sdb6 306 1306 8029184 fd Linux raid autodetect

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00007915

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 244 1951744 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdc2 244 1306 8530945 5 Extended
/dev/sdc5 244 306 500736 82 Linux swap / Solaris
/dev/sdc6 306 1306 8029184 fd Linux raid autodetect

Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/md0: 1998 MB, 1998520320 bytes
2 heads, 4 sectors/track, 487920 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 24.7 GB, 24665456640 bytes
2 heads, 4 sectors/track, 6021840 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0x00000000


From this wall of output you can see that our new disk was detected but does not yet contain any partitions:

Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table


Copy the partition table from a live disk, for example from sda, using the --force flag (without it, 10.04 refuses to write the table; on Ubuntu 8 this flag is not required):
sfdisk -d /dev/sda | sfdisk /dev/sdd --force

Check the result of the copy:
fdisk -l
It will show partitions on all four disks. Note that fdisk still reports no partition table on the md0 device; that is normal, since md devices carry file systems directly:

Disk /dev/md0: 1998 MB, 1998520320 bytes
2 heads, 4 sectors/track, 487920 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table


Now return the newly created sdd1 to the array:

mdadm --add /dev/md0 /dev/sdd1
It should report:

mdadm: added /dev/sdd1
After that, the rebuild of our RAID 1 starts immediately.

We do the same with our RAID 5:
mdadm --add /dev/md1 /dev/sdd6
we get:
mdadm: added /dev/sdd6

The rebuild runs; if you are curious to watch the process, enter
cat /proc/mdstat

will give us:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdd6[4] sdb6[1] sda6[0] sdc6[3]
24087360 blocks level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
[=>...................] recovery = 8.8% (713984/8029120) finish=1.1min speed=101997K/sec


Here the recovery progress is indicated. When the work is finished, after a short wait, we re-enter the command and get the following:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdd6[2] sdb6[1] sda6[0] sdc6[3]
24087360 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdd1[2] sdc1[3] sdb1[1] sda1[0]
1951680 blocks [4/4] [UUUU]


This shows that all partitions have been rebuilt.
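If you want to script a wait for the rebuild instead of re-running the command by hand, the percentage can be extracted from the recovery line. A sketch using the sample line from the output above:

```shell
# Extract the percent complete from an mdstat recovery line.
line='      [=>...................]  recovery = 8.8% (713984/8029120) finish=1.1min speed=101997K/sec'
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery *= *\([0-9.]*\)%.*/\1/p')
echo "$pct"   # prints 8.8
```

On a live system, run the same sed expression against /proc/mdstat itself; an empty result means no rebuild is in progress.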
All that remains is to make the new disk bootable:
grub
device (hd2) /dev/sdd
root (hd2,0)
setup (hd2)
quit


That, perhaps, is all: your server is as good as new, and all the information remains intact.

Source: https://habr.com/ru/post/101299/

