
NVMe over Fabrics, Fibre Channel and others

Talk about the impending death of Fibre Channel comes up only slightly less often than talk about the death of tape drives. Even back when speeds were limited to 4 Gbps, FC was already being written off in favor of the then-new iSCSI (even though a sane budget only stretched to the 1 Gbps option, with 10 Gbps supposedly just around the corner). Time passed, 10 Gbit Ethernet remained too expensive and, moreover, could not deliver low latency. iSCSI became widespread as a protocol for connecting servers to disk arrays, but it never managed to push FC out completely.

The years since have shown that Fibre Channel infrastructure continues to grow rapidly, interface speeds keep climbing, and it is clearly premature to talk of its demise. In the spring of this year (2016) the Gen 6 standard was announced, doubling the maximum speed from 16GFC to 32GFC. Beyond the traditional performance increase, the technology received a number of other improvements.

The standard allows four FC lanes to be combined into a single 128GFC channel for linking switches over a high-speed ISL. Forward Error Correction (FEC) was already available as an option in fifth-generation FC products, but in Gen 6 its support became mandatory. At such high speeds not only does the probability of errors grow (the BER for Gen 6 is 10⁻⁶), but the impact of errors on performance grows even more because of the need to resend frames. FEC lets the receiving side correct errors without requesting retransmission of frames, so the effective data rate stays smoother. Energy efficiency has not been overlooked either: copper ports can be shut off completely to save power, and optical ports can reduce their power draw by up to 60%.
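To make the FEC idea concrete, here is a minimal Python sketch. It uses a classic Hamming(7,4) code purely as a toy illustration, not the far stronger code actually specified for Gen 6 links; the point is only that the receiver can repair a corrupted bit by itself, without asking for the frame again.

```python
# Toy illustration of the FEC idea (not the actual Gen 6 code): a Hamming(7,4)
# encoder/decoder that lets the receiver fix a single flipped bit without
# asking the sender to retransmit the frame.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: received 7-bit codeword; returns the corrected 4 data bits."""
    c = c[:]                          # don't mutate the caller's list
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3        # 1-based position of the bad bit, 0 = clean
    if pos:
        c[pos - 1] ^= 1               # corrected locally, no retransmission needed
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                      # simulate a single bit error on the wire
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```

Real FC links protect much larger blocks with correspondingly more parity, but the principle of retransmission-free recovery is the same.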

Still, a key strength of Fibre Channel is low latency (about 70% lower than the currently widespread 8 Gbps generation). It is precisely this combination of low latency and high throughput that makes 32GFC a good fit for connecting all-flash arrays. NVMe systems, which place the highest demands on the storage network, are looming ever larger on the horizon, and 32GFC has every chance of earning a worthy place there.
FC Gen 6 chips, adapters and the Brocade G620 switch were announced in the spring along with the standard itself, and more recently new directors (chassis switches) of the Brocade X6 family were announced. In the maximum configuration (8 slots) a chassis supports up to 384 32GFC ports plus 32 128GFC ports, with a total bandwidth of 16 Tbps. Depending on the chassis, you can install 8 or 4 line cards: either FC32-48 blades (48 32GFC ports) or multiprotocol SX6 blades (16 32GFC ports, 16 1/10GbE ports and two 40GbE ports). The SX6 blades allow switches to be interconnected over IP networks. Unfortunately, the chassis itself was not carried over: the good old DCX-8510 cannot be upgraded to 32GFC, while the X6 line is declared to support future Gen 7 cards.
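As a quick sanity check of that headline figure (my arithmetic, not from the announcement): 384 × 32 Gbps + 32 × 128 Gbps = 12,288 + 4,096 = 16,384 Gbps, which is indeed roughly 16 Tbps of aggregate port bandwidth.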

Considerable attention is paid not only to raw hardware capabilities but also to management. Brocade Fabric Vision with IO Insight technology allows proactive monitoring of the entire I/O path, not just down to physical servers but from individual virtual machines to specific LUNs on the storage arrays. When many different applications are consolidated on a single storage system, analyzing the performance of the whole stack is rather complicated, and collecting metrics at the switch level can significantly simplify finding the culprit. Configurable alerts help to react quickly to potential problems and prevent performance degradation of key applications.

Of course, Fibre Channel is not the only game in town: Mellanox has announced the upcoming BlueField chip family. These are systems-on-chip (SoC) with NVMe over Fabrics support and an integrated ConnectX-5 controller. The chip supports InfiniBand at up to EDR speeds (100 Gb/s) as well as 10/25/40/50/100 Gb Ethernet. BlueField is aimed both at NVMe all-flash arrays and at servers connecting over NVMe over Fabrics. Such specialized devices are expected to improve server efficiency, which matters a great deal for HPC. Used as the network controller of an NVMe array, the chip eliminates the need for PCI Express switches and powerful CPUs. Some will say that such specialized silicon runs counter to the ideology of software-defined storage and commodity hardware, but I think that if it lets us lower the price of the solution and optimize performance, it is the right approach. The first BlueField shipments are promised for early 2017.


The number of NVMe-based storage systems will steadily increase in the near future. Connecting servers through a PCI Express switch delivers maximum speed but has a number of drawbacks, which is why the recently published version 1.0 of the NVM Express over Fabrics standard has come to the fore. Either FC or an RDMA fabric can be used as the transport; the latter can in turn be implemented physically over InfiniBand, iWARP or RoCE.
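To show what a fabric-attached NVMe device looks like from the host side, here is a minimal Python sketch. It assumes a Linux host with the NVMe fabrics drivers loaded and the usual sysfs attributes exposed; the paths and attribute names are how recent Linux kernels present controllers, not anything mandated by the standard itself.

```python
# Minimal sketch: enumerate NVMe controllers via sysfs and report whether each
# one is a local PCIe device or attached over a fabric transport (rdma, fc, ...).
import glob
import os

def read_attr(ctrl_path, name):
    """Read a single sysfs attribute, returning 'n/a' if it is missing."""
    try:
        with open(os.path.join(ctrl_path, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    transport = read_attr(ctrl, "transport")   # "pcie", "rdma", "fc", ...
    address = read_attr(ctrl, "address")       # PCI address or fabric target address
    model = read_attr(ctrl, "model")
    print(f"{os.path.basename(ctrl)}: {model} via {transport} ({address})")
```

On a host connected over RDMA or FC, the transport field shows the fabric type and the address points at the remote target rather than a local PCI slot.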

RDMA transport over InfiniBand will prevail mostly in HPC systems, and wherever people are willing to roll up their sleeves and do things themselves. There is nothing disparaging in that: Fibre Channel has been an established enterprise standard for many years, and the likelihood of running into problems with it is much lower than with RDMA, both in terms of compatibility with a wide range of application software and in ease of management. All of this has a price, which the enterprise market watches closely.

At one time some vendors predicted great success for FCoE, since it allows the storage network to be unified with the regular data network, but in practice it never won a significant share of the market. Today, NVMe storage systems with Ethernet connectivity and NVMe over Fabrics transport via RoCE (RDMA over Converged Ethernet) are developing actively. There is a chance that success here will be greater than FCoE ever achieved, but I am sure we will still see more than one generation of Fibre Channel devices. And it is far too early to say that "at last we can get by with Ethernet alone": yes, it is often possible, but it is far from certain that it will be cheaper.

Today, if an FC network is already deployed, alternative solutions are very rarely brought in; it makes more sense to upgrade the equipment to Gen 5 or Gen 6, and the effect is noticeable even with a partial upgrade. Even if the existing storage system does not support the maximum speed, updating the storage network often reduces latency and raises the overall performance of the whole complex.

Trinity engineers will be happy to advise you on server virtualization, storage systems, workstations, applications and networks.

Visit the popular Trinity technical forum or request a consultation.

Other Trinity articles can be found on the Trinity blog and hub. Subscribe!

Source: https://habr.com/ru/post/309086/

