
EMC VNXe1600 Storage System Overview



The VNXe family is EMC's line of entry-level storage systems. It was recently extended with a new model, the VNXe1600, which is the subject of this review.

The VNXe1600 is an entry-level model designed for the simplest possible installation and configuration. It is positioned as a single storage system for small infrastructures running typical modern workloads (databases, Exchange mail, VMware and Microsoft virtual servers) or as dedicated storage for a single project. Of the access protocols, only Fibre Channel and iSCSI are offered: there is no file functionality, at least for the moment.

A note up front: this review is based on the available documentation and other public information; the system has not reached us yet, so we have not been able to try it hands-on. We will start with a historical excursion; those interested only in the actual technical substance can skip it painlessly.

A historical digression to understand the pedigree: where the roots, and some possible vestiges, come from.
Historically, EMC has produced a line of powerful block midrange storage systems (though not its main pride: the flagship is the high-end EMC Symmetrix line), known for several generations as EMC CLARiiON. Their direct heirs were renamed EMC VNX Unified Storage, now in its second generation.
Over the years this has been one of the main midrange storage lines on the market, setting and shaping trends for the industry as a whole; in particular, the active adoption of the Fibre Channel protocol and RAID 5.
True, the word "historically" that opened this part applies with reservations. CLARiiON was originally developed by Data General, and some architectural concepts laid down back then have survived to this day. Data General even tried to compete with its innovative CLARiiON against EMC's flagship Symmetrix (now EMC Symmetrix VMAX, or simply VMAX). According to blog memoirs by people who stood at the origins of the CLARiiON architecture, some architectural decisions were justified precisely by this competition with EMC. Later, and by now long ago, Data General was acquired by EMC. Today the roots of the VNX systems are still visible in the fact that many operating systems identify CLARiiON/VNX volumes (LUNs) as DGC RAID or DGC LUNZ, where DGC stands for Data General Corporation.


Figure 1. Roughly what the first CLARiiON looked like.

For a long time EMC also had a line of NAS systems called EMC Celerra. Architecturally, these were NAS gateways connected to block storage systems (at some points, if I am not mistaken, support for third-party arrays was announced, but as a rule the block systems had to be EMC's own). The NAS gateway had no capacity of its own; it used the capacity of those systems and added access via file protocols: CIFS, NFS, iSCSI, and so on. Strictly speaking, iSCSI is of course a block protocol and effectively SAN, but iSCSI support is typical of NAS-class systems accessed over Ethernet.


Figure 2. This is how a high-end storage system providing NAS functionality looked around 2000. A properly harsh industrial look with no blue backlight, although the power busbars were painted in different colors.

There were also integrated models that combined Celerra and CLARiiON components, but block and file functionality were managed separately; in many ways these were two systems in one rack. Real integration began with the arrival of the Unisphere management interface on the CLARiiON CX4 generation, which brought file and block management together, and it was cemented with the EMC VNX Unified Storage line. In VNX systems the block and file controllers are still separate pieces of hardware, but management and authentication are fully integrated.

To me, these systems (CLARiiON, Celerra and VNX) are direct heirs, and with that caveat I can easily call a VNX a "clariion", or, when speaking of its file functionality, a "celerra".

Just a month or two after the release of the VNX, the first system of the VNXe generation, the VNXe3100, appeared; it, on the contrary, offered only network functionality, with no Fibre Channel at all. That is, using existing experience and software, the functionality of both block and NAS systems was implemented on a single controller in a small box, but only the NAS side was exposed. The functionality was somewhat limited, and management was greatly simplified. The system was an attempt to enter the low-cost segment, aimed at customers who have no Fibre Channel and no plans for it, and no particular desire to learn about RAID groups (there is no RAID group concept in the system interface).
A further development, and so far the apogee of the VNXe series, was the VNXe3200, which provides both file and block functionality, including all the main "advanced" features of its older siblings. In other words, almost a VNX Unified, but within a single hardware box. The model, it must be said, turned out very well. In my opinion only two of its decisions are debatable: the built-in 10 GbE Base-T ports are inconvenient for those who run 10 GbE over optics (interface modules have to be installed), and the disk subsystem configuration is limited to certain templates that were not described very explicitly in the specifications and marketing materials.


Figure 3. The currently offered VNX and VNXe portfolio

Recently the VNXe3200's younger sibling, the VNXe1600, was brought to market. It currently provides only block functionality, and it is the system we will examine here.



VNXe family of systems


To clarify the differences within the VNXe family, here is a brief list of the main models and milestones:
VNXe3100 - GA March 2011. A purely network storage system: Ethernet only, with NAS protocols and iSCSI.
VNXe3300 - GA March 2011. Also Ethernet only (NAS protocols and iSCSI), but a significantly more powerful option than the VNXe3100.
VNXe3150 - GA August 2012. An "improved" VNXe3100 with more memory and further functional development; still Ethernet only, NAS protocols and iSCSI.
VNXe3200 - GA May 2014. The logical continuation of the series, and so far its apogee: in addition to NAS and iSCSI over Ethernet, it added Fibre Channel and the advanced functionality of the older lines.
VNXe1600 - new model, GA August 2015; currently block functionality only: Fibre Channel or iSCSI over Ethernet.

VNXe1600


The system architecture is traditional: a 2U controller shelf with disks (DPE), to which additional 2U disk shelves (DAE) are connected over SAS. Both the controller shelf and the expansion shelves come in two versions: 12 x 3.5-inch disks (what some call LFF) or 25 x 2.5-inch disks (what some call SFF).

Characteristics of the system compared with its older sibling:

Table 1. Technical characteristics of the VNXe1600 compared with the VNXe3200

What the table does not show is the principal difference: the VNXe1600 offers block functionality only. This partly compensates for the difference in RAM, of which the VNXe1600 has three times less, since there is no need to hold file-serving code and its service data in memory. Still, the memory is noticeably smaller, 8 GB per controller, and there are fewer CPU cores.

The specifications show that the system accepts even more disks than its older colleague: 200 versus 150 for the VNXe3200. At the same time, given the smaller amount of memory and the fewer cores, there is less computing power per unit of capacity. In general, starting with the VNX line, EMC has deliberately limited the maximum supported number of disks; the restriction is not technical but a matter of positioning (marketing in the good sense), which made comparing models with other vendors' products on this metric rather meaningless. The flip side is that the smaller models of the series scale almost linearly in performance as disks are added: even at the maximum disk count, the saturation point is not reached. 200 disks is only 7 additional shelves on top of the controller shelf if all of them hold 25 x 2.5-inch disks, i.e. 16U in total.

There is one more limitation: a footnote in the specification states that the maximum raw capacity is 400 TB, so stuffing the system to the brim with 4 TB disks will not work. Note also that in reality the limit is not the number of disks but the number of disk slots. That is, if the system already has 16 shelves of 12 disks (16 x 12 = 192 slots), adding one more shelf will not work even if it holds fewer than 8 disks, because the slot count would be exceeded (192 + 12 = 204; 192 + 25 = 217).
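To make the interplay of these limits concrete, here is a minimal sketch in plain Python (not a vendor tool; the limit values and 2U shelf height are simply the figures quoted above, and the 200-slot ceiling is the one implied by the example) that checks a proposed shelf layout against the disk, slot, raw-capacity, and rack-height constraints:

```python
# Sketch: check a proposed VNXe1600 shelf layout against the published limits.
# The numbers are assumptions taken from the figures above; verify against the
# current specification sheet before relying on them.

MAX_DISKS = 200          # maximum number of installed disks
MAX_SLOTS = 200          # the real ceiling applies to slots, not installed disks
MAX_RAW_TB = 400         # footnoted raw-capacity limit, in TB
SHELF_HEIGHT_U = 2       # DPE and every DAE are 2U

def check_layout(shelves):
    """shelves: list of (slots_per_shelf, disks_installed, disk_size_tb);
    the first entry represents the DPE itself."""
    slots = sum(s for s, _, _ in shelves)
    disks = sum(d for _, d, _ in shelves)
    raw_tb = sum(d * size for _, d, size in shelves)
    height_u = len(shelves) * SHELF_HEIGHT_U
    problems = []
    if slots > MAX_SLOTS:
        problems.append(f"slot count {slots} exceeds {MAX_SLOTS}")
    if disks > MAX_DISKS:
        problems.append(f"disk count {disks} exceeds {MAX_DISKS}")
    if raw_tb > MAX_RAW_TB:
        problems.append(f"raw capacity {raw_tb} TB exceeds {MAX_RAW_TB} TB")
    return height_u, problems

# 16 shelves of 12 x 3.5" disks = 192 slots; adding one more, even half-empty,
# shelf fails because slots (192 + 12 = 204), not installed disks, are counted.
layout = [(12, 12, 2.0)] * 16 + [(12, 6, 2.0)]
height, issues = check_layout(layout)
print(height, issues)    # 34 ['slot count 204 exceeds 200']
```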

Interfaces: the VNXe1600 is among the first EMC systems to support 16 Gb/s Fibre Channel, but there is no 10 GbE BASE-T support, whereas the VNXe3200 has such ports built in.

Let's look at the hardware in more detail.


This is what the controller shelf (DPE, Disk Processor Enclosure) looks like from the back. There is nothing particularly interesting at the front, apart from a bezel with blue backlighting, behind which sit the 12 x 3.5-inch or 25 x 2.5-inch disks:

Figure 4. Rear view of the controller shelf (DPE)

The system divides into two halves, whose components are labeled A and B: on the right and on the left, respectively, when viewed from the rear. This is marked with arrows on the chassis.

The upper part of each half holds the power and cooling assembly, consisting of three fan modules (1) and the power supply itself (2); the lower part is the controller with its ports. Either power supply alone can power the entire shelf. For the system not to start saving its cache and shutting down, at least one power supply and at least two fan modules per controller must remain operational.

As in other VNXe-series systems, the cache is protected by a battery backup unit (BBU) installed in the controller itself. The battery's job is not to keep the system running or to keep the memory powered after power is lost, but to give the system time to flush the cache contents to the built-in solid-state drive (the same drive the system boots from).


Figure 5. DPE components (half of the system is shown)

Each controller has two onboard SFP+ ports labeled CNA (3); these are universal ports that can be configured as 8 or 16 Gb/s Fibre Channel or as 10 GbE, with the corresponding SFP transceivers installed. The configuration is fixed when ordering and cannot be changed in the field (at least not by the customer, and at least for now).

An additional interface module can be installed (10); in the basic configuration there is a blank filler in its place. Strictly speaking, one should say "modules" rather than "module", since an expansion means installing a pair of identical modules, one in each controller.

At launch there is a choice of three modules, all of them 4-port: 8 Gb/s Fibre Channel, 10 GbE optical, or GbE Base-T. A module can be chosen with the initial order or added later as an upgrade. Installing a module requires removing the controller, but since there are two of them, this can be arranged as a non-disruptive upgrade in terms of data availability.

The two square ports (4) are 6 Gb/s SAS ports, meaning the system has two back-end (BE) buses for connecting disk shelves. The system shelf itself is shelf #0 on bus #0. The connectors are of the SAS HD form factor.

Among the set of LEDs (there is no point going through all of them), the one worth noting is the Unsafe to Remove indicator, which lights up when the controller must not be removed to avoid losing cache data: for example, when the other controller has already been removed, or during startup and cache-save operations. Note also that the VNXe1600 drops the "single-headed" (single-controller) configuration that was available on earlier systems of the entry line.

There is an Ethernet port for management and a service port. As in the VNXe3200, there is no hardware serial port; if a service intervention requires one, a virtual console over Ethernet is used.

Disk subsystem


Disk shelves, like the system shelf, come in two types:



Figure 6. Shelf for 25 x 2.5-inch disks


Figure 7. Shelf for 12 x 3.5-inch disks

Disk shelves connect to the controller over SAS and are distributed evenly across the two BE buses. In terms of addressing, the system shelf itself is shelf #0 on bus #0.

Table 2. Maximum configuration of disk shelves and the number of disks for them

SAS, NL-SAS, or Flash drives are available. In fact, all of these drives use a SAS interface to connect to the disk shelves, but every vendor is keen to teach people its own terminology. To make it clearer:
SAS: spinning disks at 10K or 15K rpm.
NL-SAS: high-capacity disks spinning at 7,200 rpm, available only in the 3.5-inch form factor.
Flash: solid-state drives. There are two types: those that can be used for FAST Cache (SLC drives with greater endurance and reliability) and those that cannot but are somewhat cheaper (FAST VP Flash). Drives that support FAST Cache can, if desired, also be used to build a pool.


Table 3. Supported drive types

The table shows that high-capacity disks are available only in the 3.5-inch form factor, while the rest come in either. This is understandable: these days a "3.5-inch" disk is often a 2.5-inch drive in the appropriate carrier.

Configuring the disk subsystem comes down to configuring pools. A pool is a set of disks of a single type (the system does not support FAST VP) on which virtual volumes, LUNs, are subsequently created. That is, when a pool is created, the system internally carves out one or more RAID groups with the chosen protection type and presents them to the user as a single space for placing volumes (LUNs). Capacity within the pool can also be reserved for snapshots.

The hot-spare policy is not customizable and follows the usual EMC recommendation: 1 hot spare per 30 disks of the corresponding type.
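A quick arithmetic illustration of that rule (a plain Python sketch, not anything vendor-supplied):

```python
from math import ceil

def hot_spares_needed(disks_of_type: int, ratio: int = 30) -> int:
    """EMC's usual recommendation: 1 hot spare per 30 disks of a given type."""
    return ceil(disks_of_type / ratio)

# e.g. 14 disks of one type -> 1 spare; 31 disks -> 2 spares
print(hot_spares_needed(14), hot_spares_needed(31))
```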

Target RAID group sizes, i.e. the number of disks recommended for a pool when it is created:


When creating a pool, you specify the desired size. From the documentation it is not yet clear whether this is a hard constraint, as on the VNXe siblings, or whether, if the number of disks is not an exact multiple, the system will create slightly uneven RAID groups, as the larger VNX does. For now it appears that only sizes which are exact multiples can be specified. On VNXe systems this restriction to templates was apparently driven by optimization of the file functionality. I will not go into the subtleties here, all the more so because I do not know exactly how the VNXe works internally, but the point is that these limitations are not whims but technical optimization.

It is worth knowing about one inherited feature: the first four disks are system disks. Part of their capacity is reserved for service needs, and in some operations they behave slightly differently from the others. The net capacity of the system disks is therefore smaller. If they are combined into a RAID group (namely a group, not a pool, although the VNXe ideology suggests not thinking in groups) with other drives, the same amount of capacity is effectively "lost" on those drives too. Each system disk gives up roughly 82.7 GB of space. For example, if the first 4 disks are 900 GB nominal (about 818.1 GB usable) and configured as RAID 1/0, the usable capacity will be 1470.8 GB (735.4 GB per system disk), whereas the same group on non-system disks would give about 1636.2 GB (818.1 GB per disk). If, say, the system holds only 14 disks of 900 GB and we choose RAID 5 12+1 plus a hot spare for the pool, we get 8824.8 GB usable, losing 82.7 GB not only on the four system disks but also on the nine non-system disks in the group. Why such a reserve is needed when the system boots from, and saves its cache to, a solid-state drive is not entirely clear; perhaps it is an extra safety margin, and the developers decided not to break the existing model just yet.
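The arithmetic above can be reproduced with a short sketch (the 818.1 GB usable and 82.7 GB overhead figures are the ones quoted above; this is an illustration, not the vendor's capacity calculator):

```python
# Reproduce the usable-capacity figures quoted above.
# Assumed inputs: a 900 GB nominal drive gives ~818.1 GB usable; each of the
# first four (system) disks gives up ~82.7 GB for service needs.
USABLE_900GB = 818.1
SYSTEM_OVERHEAD = 82.7
SYSTEM_DISK = USABLE_900GB - SYSTEM_OVERHEAD        # ~735.4 GB

# RAID 1/0 over the four system disks: half of the disks hold mirror copies.
raid10_system = 4 // 2 * SYSTEM_DISK                # ~1470.8 GB
raid10_regular = 4 // 2 * USABLE_900GB              # ~1636.2 GB

# 14 x 900 GB disks: 1 hot spare + RAID 5 (12+1). Because the group mixes
# system and regular disks, every member is trimmed to the smaller size.
data_disks = 12
raid5_pool = data_disks * SYSTEM_DISK               # ~8824.8 GB

print(round(raid10_system, 1), round(raid10_regular, 1), round(raid5_pool, 1))
# -> 1470.8 1636.2 8824.8
```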

Hint: when planning a system configuration, ask your supplier to calculate the required layout. Partners have a capacity calculator in which you can specify the desired sets of disks and groups; it produces a PDF that details what goes where and how much usable space will be available.


Figure 8. The capacity calculator draws the configuration and calculates effective capacity and rack height

FAST Cache is available, a feature that extends the cache memory using solid-state drives, but the configuration is limited to a maximum of two drives. A separate RAID 1/0 group (RAID 1 in the case of two drives) is created for it, so, taking mirroring into account, the capacity can be 100 or 200 GB of nominal volume. "Nominal" because a drive labeled 100 GB provides about 91.69 binary GB, and a 200 GB drive about 183.41 GB; the exact capacity figures are given in the model's specification sheet.

Functionality

Here are some of the system's key features. For many EMC storage systems, a significant part of the functionality arrives some time after the launch of the system itself. I do not know whether this will also be true for the VNXe1600: I have not looked at the roadmap, and if I had, I could not talk about it.


Figure 9. VNXe1600 system functionality

The following functionality is included in the base delivery of the system. Of the optional additions currently available there is EMC PowerPath, host-side software that provides multipathing (path failover and load balancing).

Data access protocols: iSCSI and Fibre Channel.

Unisphere: the system can be managed both through the embedded web interface and through the Unisphere CLI command line (for those who want to script something, for example). The management interface is similar to that of other VNXe systems and is quite simple. Besides configuration, it provides monitoring of capacity usage and performance. For large infrastructures, centralized management of disparate systems (VNX, VNXe, CLARiiON CX4) is supported via Unisphere Central, a separate management server deployed as a virtual appliance.

Figure 10. Unisphere Management Web Interface

System management uses an RBAC model with a set of predefined roles. LDAP integration is available for authenticating users who access the management interface.

FAST Cache: the use of solid-state drives to extend the cache. Support is limited to two SLC solid-state drives, i.e. a maximum of 200 nominal GB.

Thin Provisioning: the ability to create "thin" volumes whose capacity is actually allocated only as it is used, enabling oversubscription. By now this is completely standard for modern storage systems.

Snapshots: the ability to create point-in-time copies of volumes. Snapshots use ROW (Redirect on Write) technology, so their mere presence does not slow down the source LUN. There is built-in functionality for creating consistent snapshots across a group of volumes.

Asynchronous Replication: asynchronous replication is supported at the level of LUNs or VMFS datastores (which are the same LUNs, just presented separately) between VNXe1600 and VNXe3200 storage systems. The mechanism is built on snapshots and transfers the difference between them; accordingly, consistency groups can be used. Synchronization can be scheduled at a fixed interval or triggered on demand by the user.
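For intuition only, here is a toy sketch of the snapshot-delta idea: take a snapshot, ship only the blocks that changed since the previous one, repeat on a timer. The class and function names are invented for illustration and have nothing to do with the actual VNXe implementation or API.

```python
# Toy model of snapshot-based asynchronous replication. Purely illustrative.
import time

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})          # block_id -> data

    def snapshot(self):
        return dict(self.blocks)                  # point-in-time copy (ROW on a real array)

def delta(prev_snap, new_snap):
    """Only the blocks that differ between two snapshots cross the wire."""
    return {bid: data for bid, data in new_snap.items() if prev_snap.get(bid) != data}

def sync_cycle(source: Volume, target: Volume, prev_snap):
    new_snap = source.snapshot()
    target.blocks.update(delta(prev_snap, new_snap))
    return new_snap                               # baseline for the next cycle

if __name__ == "__main__":
    src, dst = Volume({1: "a", 2: "b"}), Volume()
    baseline = src.snapshot()
    dst.blocks = dict(baseline)                   # initial full copy
    src.blocks[2] = "b2"                          # host writes land on the source
    time.sleep(1)                                 # stand-in for the configured sync interval
    baseline = sync_cycle(src, dst, baseline)
    print(dst.blocks)                             # {1: 'a', 2: 'b2'}
```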

Support for virtual environments: in the best tradition, the VNXe integrates most fully with VMware, including: VMware Aware Integration (VAI), which lets you see VMware data from the VNXe interface (which datastore sits on which LUN, which virtual machines live on it, and so on); Virtual Storage Integrator (VSI), a plug-in for the VMware vSphere client that enables the opposite, conveniently viewing and managing storage objects from the VMware interface; and VMware API for Array Integration (VAAI), which offloads part of the work from the server to internal storage processes, such as copying virtual machines, zeroing blocks, locking, and coordination around thin provisioning.

Quick start


After the system is racked, it must be initialized with a special utility, the Connection Utility, supplied with the system or available from the vendor's support site. The utility works via broadcast (which is why you need to connect from the same network segment): it finds the storage system, establishes communication with it, and lets you assign the management IP.

After that, configuration continues in the web interface at the assigned IP address. On first login a wizard walks you step by step through most of the infrastructure settings and helps you quickly configure the basics.
There are no tricky settings: just configure the pools and create storage resources on them. Moreover, when the wizards are used, the system can, for example, immediately attach a newly created LUN as a VMware datastore, so there is no need to separately rescan adapters and match LUN numbers from the vSphere management interface.

Global support


For timely support, the storage system must be registered and displayed correctly on the manufacturer's support portal, support.emc.com. If this is your first EMC product, you will also need to register on the portal. If everything is done correctly and the system has a route to the Internet, most of this information can be accessed directly from the storage system interface, including how-to videos and the forums.

For almost all of its products, EMC expects Call-Home and Dial-In to be configured to ensure a timely response to problems. Call-Home is the sending of messages from the system to the EMC Global Support Center; Dial-In is the ability for global support specialists to connect remotely to diagnose and resolve problems. In the minimal variant, Call-Home is implemented as e-mail messages and Dial-In through the WebEx web-conferencing system. The recommended option is the EMC Secure Remote Support (ESRS) Gateway, a dedicated EMC product that implements a secure HTTPS tunnel between the customer's infrastructure and the global support infrastructure. In general such a server is now delivered as a virtual appliance, but VNXe systems have a built-in component, so nothing extra needs to be deployed: it is enough to allow outbound HTTPS connections through the firewall to the addresses of the EMC support servers.

Conclusions and possible uses


The new storage system fills a specific niche in the EMC product line: purely block entry-level storage. Previously this niche was occupied by the CLARiiON AX4-5 and the VNX5100. There is steady demand for such storage systems, so it will find its customers. The system was officially announced on August 10, but we have not handled it yet and have no customer feedback so far.

A decent amount of headroom for scaling and the range of available disks make it suitable for a fairly wide set of tasks that do not require advanced functionality on the storage side. Possible applications at first glance:
• An entry-level block system is a good option for replacing previous-generation block storage systems whose support has ended and where advanced functionality is not required;
• A good option for a new entry-level data consolidation system where file protocols are not needed, or where it is considered more convenient to implement them separately on file servers;
• Simply as back-end storage for a file server;
• A relatively inexpensive solution for building clusters, first and foremost for virtual infrastructures;
• Storage for video surveillance systems (in practice these are still often connected over a block SAN rather than NAS), thanks to the ability to combine high-capacity disks with, possibly, faster ones (if the surveillance architecture includes a database or directory);
• There may also be interesting all-flash configurations, though these are more relevant for NAS than for SAN;
• A possible backup-to-disk target connected to a backup server, although we primarily advocate EMC Data Domain with client-direct backup, but that is another story.

Links to vendor resources:


White Paper: Introduction to the EMC VNXe1600. A detailed Review
Specification Sheet: EMC VNXe1600 Block Storage System

Source: https://habr.com/ru/post/265987/

