
Some time ago we came across a curious article. Concisely and without much detail, it listed everything that should, or may, shape the storage market over the next few years. Disparate publications about the trends and technologies it mentions keep appearing, and the article, as it turned out, ties the whole picture together nicely. So, albeit with a delay, it seems right to share it with Runet as a free translation.
Data Storage Market: Down with the Old, Meet the New
Vendors of traditional storage systems are going through hard times. NetApp recently had to cut 12% of its workforce. In data storage, the market forecasts for 2016 look encouraging only for SSDs: while HDD-based products struggle, SSD solutions keep increasing sales and appear poised to accelerate their inevitable capture of the market.
According to some forecasts, SSDs should reach price parity with HDDs by 2017. Already, a decent-quality SATA SSD can be bought for less than an HDD. These are MLC drives, but a report published by Google on six years of using various drives in its data centers shows that MLC (multi-level cell) drives are no less reliable than SLC (single-level cell) drives, although they are noticeably cheaper.
The price per gigabyte of SLC is several times higher than that of MLC. Ever since both technologies reached the market, SLC drives have been positioned as the more reliable option, preferred for storing important data, while MLC was the budget choice, less reliable and less durable. Which is logical: in drives with multi-level cells, each cell stores more information. This lowers the cost per gigabyte but makes each cell more error-prone, and that has to show up somewhere. The RBER (raw bit error rate) of MLC drives is orders of magnitude higher than that of SLC. For a long time RBER was considered the main indicator of a drive's reliability, but these studies demonstrate that RBER in fact has little bearing on a drive's service life or on the likelihood of uncorrectable errors.
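For clarity: RBER is simply the fraction of read bits that come back corrupted before the drive's error correction fixes them. A minimal sketch (the numbers below are made up purely for illustration, not taken from the Google report):

```python
def raw_bit_error_rate(corrupted_bits: int, bits_read: int) -> float:
    """RBER: fraction of read bits that arrive corrupted,
    counted *before* the drive's ECC corrects them."""
    if bits_read <= 0:
        raise ValueError("bits_read must be positive")
    return corrupted_bits / bits_read

# Illustrative, invented figures: MLC sees orders of magnitude
# more raw bit errors than SLC over the same volume of reads.
mlc_rber = raw_bit_error_rate(corrupted_bits=250, bits_read=10**9)
slc_rber = raw_bit_error_rate(corrupted_bits=2, bits_read=10**9)
print(f"MLC RBER ~ {mlc_rber:.1e}, SLC RBER ~ {slc_rber:.1e}")
```

The point of the study is precisely that this raw number, however alarming for MLC, does not translate into a shorter service life once ECC has done its job.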
The paper presented at USENIX states, in particular, that under data-center conditions MLC drives had to be replaced no more often than SLC ones. Solid-state drives in general lasted longer than traditional hard drives, although they produced more uncorrectable errors. The tables in the paper make it clear that a solid-state drive's service life depends more on the manufacturer and model: no one has repealed the difference between well-made devices and not-so-well-made ones. In effect, this news makes high-quality SSD solutions considerably more accessible, and now nothing prevents them from finally displacing HDDs in the corporate segment.
These processes are indirectly linked to serious damage to the traditional vendors, and that damage was probably the main reason behind EMC's merger with Dell and the layoffs at NetApp. For decades, data storage has been built on RAID arrays connected to a Fibre Channel storage network, and this class of solutions runs into problems with faster solid-state drives.
The interfaces lack bandwidth, and the controllers cannot keep up with the random I/O operations per second (random IOPS) of even a handful of SSDs. In addition, software-defined storage systems like Ceph are gaining popularity: affordable, highly scalable solutions that provide fault tolerance without requiring expensive specialized hardware. Now add the ODMs that are starting to supply equipment under their own brands to enterprises and cloud data centers, factor in the downward pressure on hardware prices, and you can see what a precarious position the makers of traditional storage systems are in.
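The controller bottleneck is easy to see with a back-of-the-envelope calculation. All the numbers below are rough order-of-magnitude assumptions, not vendor specifications:

```python
# A legacy array controller sized for spinning disks saturates
# with a tiny number of SSDs (all figures are illustrative).
HDD_RANDOM_IOPS = 200        # a typical 10k-rpm hard drive
SSD_RANDOM_IOPS = 100_000    # a typical flash drive
CONTROLLER_IOPS = 50_000     # assumed legacy-controller ceiling

def drives_to_saturate(controller_iops: int, drive_iops: int) -> float:
    """How many drives it takes to max out the controller."""
    return controller_iops / drive_iops

print(drives_to_saturate(CONTROLLER_IOPS, HDD_RANDOM_IOPS))  # hundreds of HDDs
print(drives_to_saturate(CONTROLLER_IOPS, SSD_RANDOM_IOPS))  # under one SSD
```

A controller built to aggregate hundreds of mechanical drives becomes the limiting component the moment a couple of SSDs sit behind it.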
From the consumer's point of view, we are entering a long period of falling prices for both storage systems and individual drives. The first to suffer will probably be EMC: its 10-20x markups will sit uneasily inside the merger with Dell, where margins are a more realistic 3-5x. By early 2017, Dell too will feel a decline in end-user revenue.
Sales of SAN storage systems are falling everywhere. This, in turn, will hit sales of Fibre Channel, even though the 32 Gb/s version has only just reached the market. 25-gigabit Ethernet has entered the game, with all the advantages of low cost, flexibility, and remote direct memory access (RDMA). Ethernet is the common pattern for connecting clients to cloud services and for connecting servers within the cloud, and life gets much simpler when only one type of network connection is needed. This is one of the factors that could lead to Fibre Channel being abandoned for data storage.
Some manufacturers, Mellanox for example, predict the rapid spread of 25 Gb/s Ethernet, and given that Google and AWS back the initiative and are helping it to market, it looks ready for fast adoption. Several companies are now working on support for iWARP (Internet Wide Area RDMA Protocol), which provides remote direct memory access over IP networks and makes it possible to build low-latency 25 Gb/s Ethernet fabrics. This will not only ease the transition to Ethernet, but may also lead to the highly efficient NVMe protocol displacing iSCSI and Fibre Channel as the main protocol for storage networks. That, however, will take more than a year.
NVM Express is a protocol for accessing solid-state drives connected via PCI Express. At some point, SSD technology ran up against the bandwidth limits of the SATA and SAS interfaces, which solid-state drives had inherited from electromechanical hard disks. The idea of connecting SSDs through PCIe grew out of the desire to unlock their full speed potential without inventing a new interface. So far, however, the exorbitant prices of PCIe drives have kept them from widespread adoption.

There are also several entirely new factors in data storage. HPE and Supermicro offer server solutions with non-volatile NVDIMMs, in which battery-backed flash memory insures the contents of DRAM against loss: if the power goes out, the data in the DRAM chips is copied to non-volatile media. This is a relatively inexpensive way to significantly speed up in-memory server workloads. Intel, in tandem with Micron, is also entering the market.
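The NVDIMM save/restore flow described above can be modeled in a few lines. This is a toy illustration of the concept, not a real driver or any vendor's actual firmware logic:

```python
# Toy model: during normal operation only DRAM is touched; on power
# loss the module's battery keeps it alive just long enough to copy
# DRAM into flash, and on the next boot the image is copied back.
class NVDIMM:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # fast, volatile working memory
        self.flash = bytes(size)      # slow, non-volatile backup

    def write(self, offset: int, data: bytes) -> None:
        # Normal writes go to DRAM only, at DRAM speed.
        self.dram[offset:offset + len(data)] = data

    def on_power_loss(self) -> None:
        # Battery-backed save: snapshot DRAM contents into flash.
        self.flash = bytes(self.dram)
        self.dram = bytearray(len(self.dram))  # DRAM contents vanish

    def on_power_restore(self) -> None:
        # Restore the saved image; software sees "persistent" memory.
        self.dram = bytearray(self.flash)

mod = NVDIMM(16)
mod.write(0, b"journal")
mod.on_power_loss()
mod.on_power_restore()
print(bytes(mod.dram[:7]))  # b'journal'
```

The appeal is exactly this asymmetry: the slow flash path is exercised only at power events, while everyday operations run at full DRAM speed.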
Their product is called 3D XPoint: a completely new type of memory that, according to the manufacturers, can serve not only as a drive but also as RAM, at speeds up to 1000 times those of conventional NAND flash. These products will start affecting servers and storage systems later in 2016.
This year promises to be a hectic one for the data storage market. New business models and new products will push many existing solutions out, and their replacements will be faster, more versatile, and cheaper.
Keep in mind that the article was written first and foremost about the Western market, and many of these things will reach us only years later. One can, of course, argue with the forecasts. But which of them are beyond dispute?