To boot the Linux kernel with the root filesystem located on a RAID array, the following parameters must be passed to the kernel (a working GRUB example is shown below). The options that matter here are root= and md=.
title Gentoo Linux 3.0.8 Hardened
kernel (hd0,0)/linux-3.0.8-hardened/linux \
root=/dev/md0 \
md=0,/dev/sda1,/dev/sdc1 \
rootfstype=ext4 \
rootflags=nodelalloc,data=ordered,journal_checksum,barrier=1,acl,user_xattr \
panic=15 \
vga=792
Parameter values:
1. root=/dev/md0 sets the device file that holds the root filesystem.
2. md=0,/dev/sda1,/dev/sdc1 is worth dwelling on in more detail (an example follows the list below). It has the following format:
md=md_device_number,raid_level,chunk_size_factor,fault_level,dev0,dev1,...,devn
- md_device_number is the number of the md device: for example, 0 means /dev/md0 and 1 means /dev/md1. Note that this is the device NUMBER, not the number of disks in the array, as some descriptions on the web would have it.
- raid_level - the RAID level. Required for linear mode (value -1) and RAID-0 (value 0). For other array types this information is taken from the superblock, and the value should be omitted.
- chunk_size_factor - sets the chunk size; the minimum is 4 KB (4k).
- fault_level - as far as I can tell from the documentation, this parameter is ignored by the MD driver (why provide it at all, then?).
- dev0, ..., devn - the list of devices in the array.
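To make this concrete (the device names, md numbers, and chunk value below are purely illustrative): an array with a persistent superblock, as in the GRUB entry above, needs only the device list, while a RAID-0 array must spell out the level, chunk size factor, and the (ignored) fault level:
md=0,/dev/sda1,/dev/sdc1
md=1,0,4,0,/dev/sdb1,/dev/sdd1
The first line assembles /dev/md0 from the superblocks of the two devices; the second assembles /dev/md1 as RAID-0 with chunk size factor 4 and fault_level set to 0.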
There is another important point.
The documentation states that the driver supports superblock versions 0.90.0 and md-1.
But booting from a RAID-1 array with superblock version 1.2, which mdadm creates by default, failed for me. I had to recreate the array with version 0.90.0, after which the boot succeeded. Perhaps version 1.0 is supported as well, just not versions 1.1 and 1.2.
You can create an array with a version 0.90 superblock by passing mdadm the option --metadata=0.90, for example:
$ mdadm --create /dev/md0 -n 2 -l 1 --metadata=0.90 /dev/sd[ac]1
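To check which superblock version an array actually got (standard mdadm usage; /dev/md0 and /dev/sda1 are the devices from this example), you can query either the assembled array or an individual member:
$ mdadm --detail /dev/md0 | grep Version
$ mdadm --examine /dev/sda1 | grep Version
The first reports the metadata version of the running array; the second reads it directly from the member's superblock.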
If the array already exists but its superblock is version 1.2, the only way to change it to version 0.90 is to create a new array with the command above and transfer the data from the old array to the new one. In other words, backing up your data is MANDATORY!
Let me explain why. I specifically tested, on a scratch array, whether the superblock can be switched from 1.2 to 0.90 without losing data. Spoiler: it cannot; at least, I did not succeed. If you know how to do it, please tell me, I would be grateful.
In theory, one might think you could wipe the superblocks with the command:
# !!! Caution: this destroys the array metadata !!!
$ mdadm --zero-superblock /dev/sd[ac]1
and then create a new array, skipping the initial synchronization of the disks (--assume-clean), but with version 0.90, using the command:
$ mdadm --create /dev/md0 --assume-clean -n 2 -l 1 --metadata=0.90 /dev/sd[ac]1
It works, after a fashion. An array is created and the partition table survives (a single partition with ext4 had been created on the array), but the filesystem (ext4) created earlier, before the superblocks were zeroed, refuses to mount, complaining about a damaged superblock. Comparing this filesystem's superblocks in the v1.2 and v0.90 arrays shows that they differ. Moreover, neither the primary nor the backup superblock survives (at block 1 and at block 8193). So even the command
$ mount -o sb=8193,nocheck -t ext4 /dev/md0 /mnt/test
will not save you. In other words, as far as your data is concerned, changing the RAID array's superblock version is fatal. In short: bad.
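Incidentally, if you ever need the sb= trick, the locations of an ext4 filesystem's backup superblocks can be listed with dumpe2fs from e2fsprogs (/dev/md0 being the array from this example):
$ dumpe2fs /dev/md0 | grep -i 'backup superblock'
Keep in mind that block 8193 is the classic backup location only for filesystems with 1 KB blocks; with 4 KB blocks the first backup normally sits at block 32768.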
Therefore it is better, and above all safer, to create a new array and migrate the data onto it.
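A rough outline of such a migration (a sketch only: the mount points are made up, the rsync flags are one reasonable choice, and it assumes the data fits somewhere outside the array for the duration of the move):
$ mount /dev/md0 /mnt/old
$ rsync -aHAX /mnt/old/ /mnt/backup/
$ umount /mnt/old
$ mdadm --stop /dev/md0
$ mdadm --zero-superblock /dev/sd[ac]1
$ mdadm --create /dev/md0 -n 2 -l 1 --metadata=0.90 /dev/sd[ac]1
$ mkfs.ext4 /dev/md0
$ mount /dev/md0 /mnt/new
$ rsync -aHAX /mnt/backup/ /mnt/new/
Only after verifying the copy on the new array should the backup be treated as disposable.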
By the way, recreating a damaged superblock of the same version (say, the array had version 1.2 and you recreate the superblock as version 1.2) with the two commands above works fine, and the data stays intact, thanks to --assume-clean, which only writes fresh superblocks to each disk of the array and does not touch the data itself.
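For that same-version case the sequence would look like this (again a sketch; the read-only fsck at the end is simply a sanity check I would add):
$ mdadm --zero-superblock /dev/sd[ac]1
$ mdadm --create /dev/md0 --assume-clean -n 2 -l 1 --metadata=1.2 /dev/sd[ac]1
$ fsck.ext4 -n /dev/md0
With the metadata version unchanged, the filesystem comes back intact, exactly as described above.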
Documentation for the md driver (in English)