Data ONTAP 8.3 (cDOT) is one of NetApp's largest releases, and one of its key features is the Advanced Drive Partitioning (ADP) technology. ADP has two main uses:
- Root-data partitioning
- FlashPool Partitioning (Storage Pools)
This article is about Root-Data Partitioning.

Root-data partitioning
Root-Data Partitioning splits each disk into two partitions: one large (usually the first; let's call it the data partition) and one much smaller (usually the second; let's call it the root partition). Each partition is then treated as a separate disk and can have its own owner (controller), independently of the other partition. Root aggregates for both controllers are created on the small partitions, and data aggregates on the large ones. This makes it possible to use only a small partition for the root aggregate instead of dedicating whole disks to it. It is important to note that this technology is available on FAS22XX/FAS25XX and AFF8XXX systems. Root-Data Partitioning is enabled only on disks in the first shelf connected to the system (for FAS2XXX, the first shelf is the one in the same chassis as the controllers). Each partition "works" as a separate disk and has its own name, and the commands that classically operate on disks accept these "virtual" disk names in place of physical ones.
On FAS8XXX, you still need dedicated disks for root aggregates. Why, you ask? Because the FAS8000 is architecturally designed for a very large number of disks, and against that background 4-6 disks are negligible. For small systems, however, those 4-6 disks can save a significant amount of space; the same applies to expensive AFF systems, where it is irrational to spend expensive, high-performance disks on system needs.
Root-Data Partitioning is a technology that cannot be configured or turned off. It is enabled on all new systems that ship with 8.3. If an older system (FAS22XX/FAS25XX or AFF8XXX) is upgraded to 8.3 and reformatted, Root-Data Partitioning turns on automatically and splits the disks into two partitions; you will not be asked or warned anywhere.
For Root-Data Partitioning to work you need:
- At least 4 disks per controller (at least 8 for two controllers).
If you have a new system shipped with 8.3 or newer, you already have Root-Data Partitioning; relax, nothing needs to be done.
If you want to use Root-Data Partitioning on an existing system, you need to convert it to cDOT (7-Mode does not support 8.3 or ADP), and all disks will have to be reformatted (all data will be deleted! There is no other way to enable Root-Data Partitioning).
- Boot both controllers and enter maintenance mode on each (from the boot menu).
- Destroy all old aggregates on both (all data will be destroyed! There is no other way to enable Root-Data Partitioning).
- Remove ownership from all disks with the command disk removeownership, then reboot. This is important for Root-Data Partitioning!
- Enter the boot menu, type wipeconfig (instead of a menu number), confirm, and wait; the system will reboot twice.
- (To convert from 7-Mode to cDOT, reboot into the loader and change the boot parameter there so that the system loads cDOT.)
- Enter the boot menu again, select item 4 (initialize all disks) on each controller, and wait for disk zeroing to finish.
What is the size of each partition?
It depends on the number of disks in the system, their type, and the number of controllers (1 or 2). As a rule, a minimum of 4 disks per controller is required. The partition size is chosen so that the root volume ends up with enough usable space; the root volume occupies all the space of the root aggregate, which is built from root partitions. All possible combinations are pre-calculated and baked into the system; this is not configurable. More detailed information on all possible combinations can be found at hwu.netapp.com, in the Advanced Drive Partitioning section.

Here I will give a couple of examples:
FAS2240-2 / FAS2552
- If the system has two controllers and 8 disks, the root partition size will be 110.82 GiB. In this configuration we get 4 root partitions per controller and one root aggregate per controller. Each root aggregate will consist of 2 data partitions and 2 parity partitions, with no spare partitions. The size of each root aggregate will be 2 * 110.82 GiB.
- With 12 disks on the same system, the root partition will occupy 73.89 GiB, and each root aggregate will consist of the combination 3 data + 2 parity + 1 spare. The size of each root aggregate will be 3 * 73.89 GiB.
- With 24 disks we get 27.72 GiB root partitions and the combination 8 data + 2 parity + 2 spare for the root aggregate. The size of each root aggregate will be 8 * 27.72 GiB.
FAS2240-4 / FAS2554
- For the HA systems FAS2240-4 and FAS2554 with 8 disks, the root partition is 215.47 GiB; each root aggregate will consist of 2 data + 2 parity, with no spare partitions. The size of each root aggregate will be 2 * 215.47 GiB.
- With 12 disks on the same system, the root partition will occupy 143.65 GiB; each root aggregate will consist of 3 data + 2 parity + 1 spare. The size of each root aggregate will be 3 * 143.65 GiB.
- With 24 disks the partition will be 53.88 GiB; each root aggregate: 8 data + 2 parity + 2 spare. The size of each root aggregate will be 8 * 53.88 GiB.
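To make the arithmetic above easier to check, here is a minimal Python sketch that reproduces it. The table holds only the example combinations quoted above (the full, authoritative table is on hwu.netapp.com under Advanced Drive Partitioning), and the function name is purely illustrative:

```python
# Root aggregate usable size = (number of data partitions) x (root partition size).
# The values below are only the example combinations quoted in this article;
# the complete table lives on hwu.netapp.com (Advanced Drive Partitioning).
LAYOUTS = {
    # (platform, total_disks): (root_partition_gib, data, parity, spare)
    ("FAS2552", 8):  (110.82, 2, 2, 0),
    ("FAS2552", 12): (73.89,  3, 2, 1),
    ("FAS2552", 24): (27.72,  8, 2, 2),
    ("FAS2554", 8):  (215.47, 2, 2, 0),
    ("FAS2554", 12): (143.65, 3, 2, 1),
    ("FAS2554", 24): (53.88,  8, 2, 2),
}

def root_aggr_size_gib(platform: str, total_disks: int) -> float:
    """Usable size of one controller's root aggregate, in GiB."""
    part_gib, data, parity, spare = LAYOUTS[(platform, total_disks)]
    return data * part_gib

print(root_aggr_size_gib("FAS2552", 24))  # 8 * 27.72 -> 221.76
```

Note that parity and spare partitions are carried in the table but do not contribute to usable size, which is exactly why larger disk counts with more data partitions per raid group are more space-efficient.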
Usable space
What do we gain with Root-Data Partitioning, and what are the advantages of this technology?
- First, we gain 4-6 disks that can be added to the data aggregate, even if each of them is a bit "shorter".
- Second, we gain the performance of these additional 4-6 disks, since the root aggregate is lightly loaded.
- Third, we can have Active-Passive configurations: Root-Data Partitioning lets both controllers operate, while still keeping one large data aggregate on one controller.

Upgrade and add new drives:
When adding new drives, we have several options:
- The easiest option is to create a new aggregate and not add the new disks to the existing data aggregates that live on data partitions.
- The second option is to add the new disks to an existing raid group of an existing data aggregate that lives on data partitions. In this case the new disks will be truncated to the data partition size and added to that raid group. Since a raid group is not infinite, it can be expanded up to 28/20 disks (SAS/SSD 26 + 2, SATA/NL-SAS 18 + 2); once the maximum is reached, you move on to the third option.
- The third option is to add the disks as a new raid group in an existing aggregate that initially consists of a raid group using data partitions. In this case the first group will be a little shorter than the second, but that is not a problem: in recent versions of ONTAP the raid group mechanism was specifically optimized for this, and a fairly large size spread between raid groups in one aggregate is now allowed.
- When another shelf arrives, you can create a new aggregate on that shelf and migrate volumes online from one of the old aggregates built on the shortened disks. After that, change the ownership of the freed shortened disks to the neighbor, and add those freed shortened disks to the neighbor's aggregate built from the same shortened disks, thus bypassing the third option.
- When a FAS2240/FAS2552/FAS2554 is converted into a shelf and connected to a higher-end system, for example a FAS8020, Root-Data Partitioning will keep working. This is the only way to get a FAS8XXX to work with Root-Data Partitioning.
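The second option's limit check can be sketched in a few lines of Python. The per-type maxima are the figures quoted above (SAS/SSD 26 + 2, SATA/NL-SAS 18 + 2); the helper name is made up for illustration:

```python
# Maximum RAID group sizes quoted above:
# SAS/SSD: 28 disks (26 data + 2 parity), SATA/NL-SAS: 20 disks (18 data + 2 parity).
MAX_RG_SIZE = {"SAS": 28, "SSD": 28, "SATA": 20, "NL-SAS": 20}

def disks_addable(disk_type: str, current_rg_size: int) -> int:
    """How many more disks an existing raid group can still accept."""
    return max(0, MAX_RG_SIZE[disk_type] - current_rg_size)

# A raid group of 20 SAS disks can grow by 8 more
# before the third option (a new raid group) becomes necessary.
print(disks_addable("SAS", 20))  # -> 8
```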

Disadvantages
- If one or several disks fail, both the root and data aggregates can become degraded at the same time. A degraded root aggregate is not so bad, because it is protected by the HA pair: if a controller with a damaged root aggregate cannot serve its disks, its HA partner will do so. In a system without Root-Data Partitioning, this failover to the neighbor could have been avoided.
- Truncating a disk means that part of the disk goes unused.
Active-Passive vs Active-Active
Let's compare the cases where it makes sense to use Active-Passive and where Active-Active is worthwhile. If you have a system with 24 or fewer disks, it makes little sense to split them between controllers merely for the sake of some additional performance that could theoretically come from having the data served by two controllers at once. The fact is that each FAS2XXX controller is designed to serve 144 disks (even when its partner is down). It is important to understand that system throughput is usually limited not by the controller but by the disk subsystem on the back end. Thus, in configurations with 24 disks or fewer, as a rule, no additional performance is gained simply by using two controllers instead of one.
In the Active-Active configuration you only lose 3 extra disks (2 parity + 1 spare) that could otherwise have given you more back-end performance and more capacity.

Summary: for FAS2XXX systems with 24 or fewer disks, it often makes sense to build Active-Passive configurations, since front-end performance and fault tolerance do not suffer, back-end disk-subsystem performance improves, and usable space increases. In all other cases, use an Active-Active configuration.
After initialization and initial setup, the system will be in the Active-Active configuration by default; you need to "take away" the partitions intended for data (their names end in P1 or P2) from one controller and give them to the other. If the selected partitions already carry a data aggregate, you will have to destroy it first, because otherwise the disks cannot be removed from the aggregate. This is done from the 7-Mode shell (system node run -node local). Before making changes, verify which partition is the root partition and which is the data partition (aggr status <root_aggr> -r). You can change a partition's owner from the controller that currently owns it (disk assign <disk.name.P1> -s <new-serial-number>). Each partition "works" as a separate disk: it has its own owner, which may differ from the owner of the physical disk and of other partitions on the same physical disk, and changing a partition's owner uses the same command as changing a physical disk's owner.
Example:
sys::> system node run -node local
sys-01> aggr status rootA -r
sys-01> disk show
sys-01> disk assign 0a.10.14P1 -f -s 536880408
Here P1 is the data partition of disk 0a.10.14; the partition is still owned by the sys-01 controller (the command is executed from it) and will be reassigned to the sys-02 controller (whose serial number is 536880408). Instead of the -s flag you can use -o <neighbor-name>.
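Since a partition name is just the physical disk name with a P1/P2 suffix (as in 0a.10.14P1 above), splitting one back into its parts is mechanical. Here is a small Python sketch based on that naming convention; the helper name is illustrative:

```python
import re

# Partition names are the physical disk name plus a P1/P2 suffix,
# e.g. "0a.10.14P1" is partition 1 of physical disk "0a.10.14".
PARTITION_RE = re.compile(r"^(?P<disk>.+?)P(?P<part>[12])$")

def split_partition_name(name: str):
    """Return (physical_disk, partition_number), or (name, None) for a whole disk."""
    m = PARTITION_RE.match(name)
    if m:
        return m.group("disk"), int(m.group("part"))
    return name, None

print(split_partition_name("0a.10.14P1"))  # -> ('0a.10.14', 1)
print(split_partition_name("0a.10.14"))    # -> ('0a.10.14', None)
```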
Conclusions:
Root-Data Partitioning is a technology hidden under the hood of the system; administrators may not even be aware of how it works. If you use the system "as is", it causes no maintenance difficulties: it is, so to speak, a "thing in itself" that simply exists and simply works, creating no problems during operation. Overall, the technology improves the usable-to-raw space ratio and also indirectly improves back-end performance (especially on AFF systems, where each saved SSD can add thousands of IOPS, and Root-Data Partitioning can save up to 6 of them). If you know the subtleties of Root-Data Partitioning, then at installation time you can gain even more usable space without losing fault tolerance or performance.
An ADP FAQ is available on Fieldportal. Please report errors in the text via private message; comments, additions, and questions about the article are, on the contrary, welcome in the comments.