
ETERNUS DX - a new microcode version

As noted earlier, ETERNUS DX is really a single product line, so, as a rule, new microcode versions appear simultaneously for all models in the line. Let's see what's new lately for the ETERNUS DX80/DX90/DX410/DX440/DX8700 S2 systems. For all of these systems, updating to the latest microcode version adds the following features:

1) Support for 16 Gb/s FC interface cards. This is perhaps not the most frequently requested interface today, but since Brocade began actively promoting its 16 Gb/s switches, there is no doubt that within a year most FC projects will already be at 16 Gb/s. In any case, these cards are available to order from February.

2) Support for cache sizes over 1 TB on the ETERNUS DX8700 S2.
3) A new task scheduler has been added to the Quality of Service mechanism.

4) Cache mirroring can optionally be disabled. Clearly, no one will recommend this for critical configurations, and of course cache mirroring was not disabled when the performance records in SPC-1 and SPC-2 were set. Nevertheless, the customer now has the option of increasing performance at the cost of reduced system reliability; there are applications where performance matters more than the possible risk of data loss. Naturally, this option is disabled by default.

5) Secure Erase / LUN shredding: an additional way to delete a logical volume with forced, guaranteed actual destruction of all information on the volume, performed by the array itself.
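The idea behind LUN shredding can be sketched as a multi-pass overwrite. This is only an illustrative model operating on an ordinary file; the array performs the equivalent internally, and `device_path`, the pass count, and the block size are assumptions, not the array's actual algorithm.

```python
import os

def shred_volume(device_path, passes=3, block_size=1024 * 1024):
    """Illustrative LUN-shredding sketch: overwrite every block of a
    volume several times with random data, then zero-fill, so the
    original contents cannot be recovered."""
    size = os.path.getsize(device_path)
    with open(device_path, "r+b") as dev:
        for _ in range(passes):
            dev.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(block_size, remaining)
                dev.write(os.urandom(chunk))  # random-fill pass
                remaining -= chunk
        # final pass: zero-fill so the volume reads back as empty
        dev.seek(0)
        dev.write(b"\x00" * size)
```

After the final pass, reading the volume returns only zeros, regardless of what was stored before.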

6) Logical volumes of a new type, WSV (Wide Striping Volume), can be created. This is a genuinely interesting new feature that lets you “spread” one logical volume across several RAID groups. Moreover, this is not the usual “gluing” of a LUN from two logical volumes, but interleaved use of blocks from several RAID groups: a read operation, for example, will use all the disks of all the RAID groups on which the WSV volume resides. In effect, this technology allows the array itself to significantly increase the performance of one specific logical volume: the hardware resources of the dedicated hard drives and of both controllers are fully utilized to maximize that volume's performance. Naturally, there are a number of restrictions and best practices; for example, it is not a good idea to build a WSV across RAID groups that sit on different disk types or use different RAID levels.
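The interleaving described above can be modeled as a simple round-robin mapping from a logical block address to a RAID group. This is a minimal sketch of the general wide-striping idea; the stripe size and layout parameters are assumptions, not the array's internal format.

```python
def wsv_locate(lba, stripe_blocks, num_groups):
    """Map a logical block address of a wide-striped volume to the
    RAID group holding it and the block offset inside that group.
    Stripes rotate across the groups round-robin."""
    stripe = lba // stripe_blocks           # which stripe the block is in
    offset_in_stripe = lba % stripe_blocks
    group = stripe % num_groups             # stripes rotate across groups
    group_stripe = stripe // num_groups     # stripe index within that group
    return group, group_stripe * stripe_blocks + offset_in_stripe
```

Because consecutive stripes land on different RAID groups, a long sequential read touches the disks of every group at once, which is exactly where the performance gain comes from.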

7) Zero Reclamation for Thin Provisioning volumes.
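Zero reclamation means that allocated chunks of a thin-provisioned volume that contain only zeros are returned to the free pool. A minimal sketch, with a hypothetical dict-based data model that is purely illustrative:

```python
def reclaim_zero_chunks(allocated, chunk_data):
    """Zero-reclamation sketch for a thin-provisioned volume: chunks
    whose contents are entirely zero are released back to the pool."""
    still_allocated = {}
    reclaimed = []
    for chunk_id in allocated:
        data = chunk_data[chunk_id]
        if any(data):                        # any nonzero byte -> keep
            still_allocated[chunk_id] = data
        else:
            reclaimed.append(chunk_id)       # all zeros -> reclaim
    return still_allocated, reclaimed
```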

8) Support for, and tight integration of, the array's own Thin Provisioning with Thin Provisioning in Windows Server 2012.

9) Support for ODX (Offloaded Data Transfer) technology for Windows Server 2012. In essence, this is an analogue of VAAI for VMware (which, by the way, has long been supported), but for Hyper-V.
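The point of an offloaded copy is that the host hands the array a copy descriptor (source LUN and range, destination LUN and offset) and the array moves the data internally, instead of the host reading it over the SAN and writing it back. A toy sketch, where `array` is a hypothetical dict-of-LUNs model rather than any real API:

```python
def offloaded_copy(array, src_lun, src_lba, dst_lun, dst_lba, blocks):
    """ODX-style offloaded copy sketch: the data never leaves the
    array; blocks are copied between LUNs internally."""
    src = array[src_lun]
    dst = array[dst_lun]
    dst[dst_lba:dst_lba + blocks] = src[src_lba:src_lba + blocks]
```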

10) Cache Partitioning: the ability to limit the maximum cache size that will be allocated to a particular logical volume.
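The effect of cache partitioning is that one busy volume can no longer evict its neighbours' cached pages. A minimal LRU-based sketch of that behaviour, assuming per-volume page limits (an illustrative model, not the array's implementation):

```python
from collections import OrderedDict

class PartitionedCache:
    """Each volume gets its own maximum number of cache pages;
    filling one volume's partition evicts only that volume's
    least-recently-used pages, never another volume's."""
    def __init__(self, limits):
        self.limits = limits                          # volume -> max pages
        self.parts = {v: OrderedDict() for v in limits}

    def put(self, volume, page, data):
        part = self.parts[volume]
        if page in part:
            part.move_to_end(page)
        part[page] = data
        if len(part) > self.limits[volume]:
            part.popitem(last=False)                  # evict LRU page

    def get(self, volume, page):
        part = self.parts[volume]
        if page in part:
            part.move_to_end(page)
            return part[page]
        return None
```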

11) Support for long-wave SFP+ transceivers. With these SFPs installed, the array can be placed up to 4 kilometers from the switch or host. This can, by the way, be a very cheap and effective way to increase the reliability of the whole system: just install the disk array in the next building.

12) IPv6 support.

13) Support for Internet Explorer 10 as a web browser for monitoring and management.

14) Support for up to 256 SnapOPC snapshot sessions from a single source volume.

Source: https://habr.com/ru/post/166085/
