In early May 2016, shortly before the merger with Dell was completed, EMC announced a new generation of mid-range arrays called Unity. In September 2016 we received a Unity 400F demo array configured with 10 SSDs of 1.6TB each. The difference between the models with and without the F index is described at this link on Denis Serov's blog. Since there was a time lag before the demo unit had to be handed over to a customer, we decided to run the array through the same test that we had previously run on the
VNXe3200 and
VNX5400 , to see, at least on "synthetics", whether Unity is as good compared to the previous generations of EMC arrays as the vendor describes. Moreover, judging by the vendor's presentations, the Unity 400 is a direct replacement for the VNX5400.

DellEMC also claims that the new generation is at least
3 times faster than the VNX2.
If you are curious how it all turned out, read on...
Description of the test bench and the test
Initially, the test bench was built from the same old HP DL360 G5 with one 4-core CPU and 4GB RAM. Two single-port 8Gb/s Emulex LPE1250-E FC HBAs were installed in its PCI-E slots and connected directly to the 16Gb/s FC ports of the Unity 400F. As it turned out a little later, this server's CPU was not powerful enough to fully load the storage system. Therefore, an HP BL460c G7 blade with one 12-core CPU and 24GB RAM was connected to the array as an additional IOPS generator. True, the blade enclosure only has FC switches with 4Gb/s ports, but, as they say, "don't look a gift horse in the mouth": there were no other "calculators" at hand anyway. The servers ran Win2012R2 and EMC PowerPath software for managing LUN access paths.
On the Unity 400F array a pool was created in a RAID5 (8+1) configuration. Two test LUNs were created on the pool and presented to the servers. NTFS file systems and 400GB test files were created on the LUNs to eliminate the effect of the controller cache on the results.
The IOMeter settings were as follows:


That is, 4 workers ran on each server (8 in total), and the number of I/O threads per worker doubled at each subsequent test stage: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 threads per worker. In total, the array therefore received 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 threads at each stage (see the sketch below).
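For clarity, here is a minimal Python sketch of the test plan described above. It is only an illustration, not an IOMeter configuration file; the worker count and the doubling stage list are taken from the description above.

```python
# Illustration of the test plan: 8 workers (4 per server), with the number
# of I/O threads per worker doubling at each test stage.
WORKERS = 8  # 2 servers x 4 workers each

for stage, per_worker in enumerate((2 ** n for n in range(10)), start=1):
    total = per_worker * WORKERS
    print(f"Stage {stage:2d}: {per_worker:3d} threads per worker -> {total:4d} threads on the array")
```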
By tradition, some calculations
DellEMC recommends using a maximum value of 20,000 IOPS per SSD for sizing (document
here ).

That is, in theory our 9 disks can deliver a maximum of 20,000 * 9 = 180,000 IOPS. We need to calculate how many IOPS the servers will actually get from these disks with our load profile, where the read/write ratio is 67%/33%, and taking into account the write overhead of RAID5. We get the following equation with one unknown: 180,000 = X * 0.33 * 4 + X * 0.67, where X is the IOPS the servers will get from the disks, and 4 is the RAID5 write penalty. As a result, X = 180,000 / 1.99 ≈ 90,452 IOPS.
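The same estimate as a small Python sketch; the 20,000 IOPS per SSD, the 9-disk RAID5 (8+1) pool, the 67/33 read/write split and the write penalty of 4 are the figures used above:

```python
# Back-of-the-envelope estimate of host IOPS for the pool, as calculated above.
DISK_IOPS = 20_000        # vendor sizing guideline per SSD
DISKS = 9                 # RAID5 (8+1) pool
READ_RATIO, WRITE_RATIO = 0.67, 0.33
RAID5_WRITE_PENALTY = 4   # each host write costs 4 back-end I/Os

backend_iops = DISK_IOPS * DISKS                                  # 180,000 IOPS at the disks
host_iops = backend_iops / (READ_RATIO + WRITE_RATIO * RAID5_WRITE_PENALTY)
print(f"Theoretical host IOPS: {host_iops:,.0f}")                 # ~90,452
```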
Test and Results
As a result of the test, we got the following dependence of IOPS on the number of I / O threads:

The graph clearly shows that saturation occurred at 512 I/O threads on the tested LUNs, at a value of approximately
142,000 IOPS. If you look at the VNX5400 tests, you can see that even when testing the controller cache, the
maximum IOPS did not exceed the 32,000 IOPS mark, and the VNX5400 reached I/O saturation at approximately 48 threads. It should also be noted that a single HP DL360 G5 in the above configuration delivered a maximum of about 72,000 IOPS and then hit 100% CPU, which is why a second "calculator" had to be found.
Unity has quite good functionality for collecting performance statistics on the various components of the array. For example, you can view IOPS load graphs for the array disks (each individually or all at once).


The graph shows that at peak the disks deliver "somewhat" more than the value the vendor recommends using for performance sizing.
The response time on the tested Unity configuration grew as follows:

That is, even at the "saturation point", where IOPS stop growing as the number of threads increases (512 threads), the response time did not exceed 5ms.
Dependence of response time on the number of IOPS.

Again, if you compare this with the
response time when testing the controller cache on the VNX5400 , you can see that the VNX5400 reached a 1ms response time at approximately 31,000 IOPS and about 30 I/O threads (and that was essentially to RAM). On Unity, on SSD drives, this happens only at ~64,000 IOPS. And if more SSDs were added to our Unity, this 1ms crossing point on the graph would move much further along the IOPS axis.
Dependence of bandwidth on the number of I/O threads:

It turns out that the array received and delivered streams of 8KB packets at a rate of more than 1GB/s (one gigabyte per second).
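This agrees with the IOPS figures above: a quick sanity check in Python (assuming binary units, i.e. 1 KB = 1024 bytes) shows that ~142,000 IOPS at an 8KB block size is just over 1GB/s:

```python
# Throughput check: IOPS x block size at the saturation point.
iops = 142_000
block_bytes = 8 * 1024
throughput_gb_s = iops * block_bytes / 1024 ** 3
print(f"{throughput_gb_s:.2f} GB/s")  # ~1.08 GB/s, consistent with the graph
```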
So as not to bore the reader, a number of performance graphs for the various components of the Unity 400F array are hidden under the spoiler for the curious...
Link to the IOMeter source data file.
Findings
Everyone, I think, will draw their own conclusions.
As for me, an interesting new storage system has appeared on the market that shows high performance even with a small number of SSDs. And considering the SSD sizes now available (DellEMC already offers 7.68TB SSDs for Unity, and support for 15.36TB SSDs is expected), I think that within the next few years hybrid arrays mixing SSDs and "spindle" disks will become history.
P.S. For those who like to ask "how much does it cost?": in its presentations the vendor indicates that the price for Unity F (All-Flash) starts at $18k, and for hybrid configurations at less than $10k. But since the presentations are all "bourgeois", the price may differ in our Russian realities. In any case, it is better to check with the local vendor office or its partners in each specific situation.