A modern server is an electronic device with almost no moving mechanical parts. Almost, because the hard disk, for example, is a striking exception.
In some cases, all of this can be replaced with a single compact device. When information is transferred between the electronic components of a server, the technological limit is the speed of light.
A hard disk cannot spin arbitrarily fast; its speed is limited by mechanics. As a result, it processes information hundreds or thousands of times more slowly than processors and memory do.
While the speed of processors increased tenfold, hard drives stagnated at the level of the late twentieth century. Because of this imbalance, many of the applications for which data centers are built suffer: heavily loaded processes hit a bottleneck and are forced to sit idle while information is read from and written to the hard disk.
Does flash memory solve the problem?
Yes. Flash memory has a much faster response: even an ideal (laboratory) hard drive services a request in 6-7 milliseconds on average, while flash memory does it in about 0.1 millisecond. At the same time, flash can process tens or hundreds of times more transactions than a hard disk, which is limited to roughly 150-200 operations per second.
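These figures make for a simple back-of-the-envelope comparison. The IOPS number for the flash device below is an assumption for illustration, not a benchmark:

```python
import math

# Figures from the text; FLASH_IOPS is an assumed illustrative value.
HDD_LATENCY_MS = 6.5      # average request time for an ideal hard drive (6-7 ms)
FLASH_LATENCY_MS = 0.1    # typical flash response time
HDD_IOPS = 175            # ~150-200 operations per second per hard disk
FLASH_IOPS = 20_000       # hypothetical single flash device

# Latency advantage of flash over a hard disk
print(f"Flash responds ~{HDD_LATENCY_MS / FLASH_LATENCY_MS:.0f}x faster")

# How many hard disks it would take to match one flash device on IOPS
print(f"Disks to match flash: {math.ceil(FLASH_IOPS / HDD_IOPS)}")
```

Even with these rough numbers, it takes over a hundred spindles to equal a single flash device on operations per second.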
So far, flash memory is more expensive, and for that very reason it is not suitable for every task. And just as the death of magnetic tape has been predicted for many decades while it is still alive, hard drives will be with us for quite some time. At the same time, our customers, and companies in general, clearly have workloads where money can be saved by moving from conventional storage systems to flash memory.
How should a company compare the costs?
Here it is important to understand whether the company has IT tasks that would bring in more money if they ran faster. For example, there is a report you would like to run every day, but it takes a day to compute, so it is run once a week. As a result, the price forecast in the retail network is not quite right, or partners receive stale information, or depositors leave for another bank, dissatisfied with slow ATMs. If such tasks exist, they can almost always be accelerated with flash memory. One may argue that the root cause is a poorly written application, but very often the problem can be solved by speeding up the disk subsystem. Through technology alone, without rewriting the application, you can seriously improve efficiency.
The value of flash memory is not in volume, but in speed, and it should be used for suitable applications, considering the cost not in rubles per gigabyte, but in rubles per transaction (IOPS).
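A minimal sketch of that comparison, with entirely hypothetical prices and specifications, shows how the two metrics point in opposite directions:

```python
# Hypothetical systems: (price in rubles, capacity in GB, sustained IOPS).
# All numbers below are invented for illustration only.
systems = {
    "HDD array (24 disks)": (3_000_000, 24_000, 4_000),
    "Flash array": (5_000_000, 8_000, 200_000),
}

for name, (price, capacity_gb, iops) in systems.items():
    print(f"{name}: {price / capacity_gb:.0f} rub/GB, {price / iops:.2f} rub/IOPS")
```

On these assumed numbers the HDD array is five times cheaper per gigabyte, while the flash array is thirty times cheaper per transaction; which metric matters depends on the application.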
More specifically, the large databases that live on high-end disk arrays respond very well to flash. If you have a huge application system that consumes the lion's share of the IT budget, that is one signal to think about it. A single flash storage system can replace one or even several racks in the data center. For example, when SAP users complain that everything is slow, moving the storage to flash memory will most likely improve their experience.
If there is a big desktop virtualization project, you should also think about flash. I have already seen at several customers that when there are hundreds of virtual machines, the existing storage system simply cannot cope. This is not to say that flash storage systems suit only large companies, although they stand to gain the most in absolute terms. Even if you are a mid-sized company wondering whether to buy a storage system with 20-40 hard disks for an important application, it is quite possible that 3-4 flash drives will be just as effective.
How can flash memory be embedded into an existing storage infrastructure?
There are several basic ways:
- The first is to put flash memory directly into the server. This is the most budget-friendly option. There are flash-based drives, which most people have encountered if not at work then in their own computers and laptops, and there are PCI Express cards carrying flash memory chips. This is an inexpensive way to speed up a single server. It has a number of drawbacks, the same ones that once made most companies abandon storing data on internal disks and move toward centralized storage: reduced fault tolerance, difficult maintenance, insufficient capacity, no way to share the flash memory among several servers, and so on. The amount of flash memory within a single server is limited by the number of PCI-e slots and the performance of the RAID controller; it is unlikely you will get more than 2 TB.

- A more advanced and already established method is centralized data storage: a single storage system to which the consumers of information, the servers solving particular tasks in the company, are connected over a network. The pluses are fault tolerance and the ability to share this expensive flash resource among several tasks. In my practice, even at large customers, few servers can fully load such a storage system.

- Here, too, there are options. The first is the vendors of traditional storage systems: IBM, HP, EMC, Hitachi, which for many years have built storage on conventional mechanical hard drives and have supported SSDs for several years now. For those who already have such a system, this is a fairly simple way to use flash memory: several flash-based drives are bought and inserted into the storage shelves. The pluses are simplicity and buying a solution from a trusted vendor. The downside is that these systems come from the past: their controllers contain millions of lines of code tuned for mechanics, and those algorithms do not always suit flash memory. Even traditional RAID5 is less effective in the flash world and requires rethinking.
- There are a number of new vendors who started developing from scratch in the 21st century. The advantage is that their systems were created specifically for flash memory. They manage the pool of flash memory as a whole and minimize its drawbacks: the limited number of rewrite cycles, write speed lagging behind read speed, and so on. One of the most successful examples is Violin Memory, the leader in this market. Several reputable companies have invested in Violin; one of the most serious investors is Toshiba, which invented NAND memory. If there is a heavily loaded application, you can simply move it entirely to such a new storage system. If the application is very large, or a full migration turns out too expensive, move only the most heavily loaded volumes. Specialized storage systems scale to tens and hundreds of terabytes of flash memory.
- And the last approach is to use flash not as primary storage but as another cache layer in the storage area network (SAN), between the servers and the existing storage systems, so that the flash caching appliance holds only the hottest data. The approach is very progressive but very risky; so far it is offered by young companies, even start-ups. If the slightest failure happens somewhere, you can lose data, and with it money and time. So this method remains an interesting experiment, and we cannot recommend adopting it immediately. The other options are fully industrial, proven solutions.
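To make the caching-tier idea concrete, here is a toy model assuming a simple LRU policy. This is a hypothetical sketch, not any vendor's actual design, and it ignores exactly the hard parts (write-back, failure handling, coherence) that make the approach risky:

```python
from collections import OrderedDict

class FlashCacheTier:
    """Toy model of a flash cache sitting in front of a slow backing store."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict: block_id -> data ("the disks")
        self.capacity = capacity_blocks       # how many blocks fit in "flash"
        self.cache = OrderedDict()            # hot blocks, LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:            # fast path: served from flash
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        self.misses += 1                      # slow path: go to the disks
        data = self.backing[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict the coldest block
            self.cache.popitem(last=False)
        return data

store = {i: f"block-{i}" for i in range(10)}
tier = FlashCacheTier(store, capacity_blocks=3)
for i in [0, 1, 2, 0, 1, 9, 0]:
    tier.read(i)
print(f"hits={tier.hits}, misses={tier.misses}")  # hits=3, misses=4
```

The model shows why the tier only helps workloads with a hot working set: blocks read once fall out of the cache and every cold read still pays the full disk latency.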
What is the result?
Flash speeds up servers, reduces the footprint in the data center, and saves energy. Today, storage systems built entirely on flash memory are serious competitors to high-end arrays. Such arrays are often filled with tens or hundreds of hard drives just to give the application the desired speed; capacity is often secondary. If the company pays to lease space in a commercial data center, that is quite a serious argument. And since most corporate software, such as Oracle and SAP, is licensed per core, you can save on licenses by streamlining processes and reducing the number of cores involved. If processors spend less machine time waiting for storage, they can do more work per unit of time, so fewer cores are needed to solve the same problem.
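The core-count argument can be sketched with simple arithmetic; the workload size and wait fractions below are assumptions, not measurements:

```python
import math

WORK_UNITS = 16.0     # useful CPU work the application needs (assumed)
WAIT_HDD = 0.50       # fraction of time a core idles waiting on HDD storage
WAIT_FLASH = 0.10     # the same fraction on flash (both values assumed)

# A core that waits X of the time delivers (1 - X) units of useful work.
cores_hdd = math.ceil(WORK_UNITS / (1 - WAIT_HDD))
cores_flash = math.ceil(WORK_UNITS / (1 - WAIT_FLASH))
print(f"Cores needed: {cores_hdd} on HDD vs {cores_flash} on flash")
```

Under these assumptions the same workload needs roughly half as many cores on flash, and with per-core licensing that difference goes straight to the software bill.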
And one more important point: the lifetime of flash memory is much longer than that of conventional hard drives, which means lower support costs and less risk of the data loss that hits conventional storage systems when two disks fail at once.
In the cost of storing a gigabyte, flash storage systems will keep losing for several more years, but in the cost of processing information (cost per transaction) they already beat traditional systems several times over. There are many cases in Russian and international practice where huge storage systems were replaced by small flash-based ones costing several times less, which at the same time delivered an astonishing acceleration of applications. I would venture to suggest that in the future SLC and MLC chips will take the place of today's 15K and 10K RPM disks, while high-capacity low-speed drives (SATA, 7.2K) will stay relevant for a long time.
Is it possible to combine storage systems on flash and traditional storage systems?
There are plenty of tasks that do not need ultra-high processing speed. As a rule, you should identify in your data center the applications that demand higher disk subsystem speed and move them to flash storage. The applications remaining on the "regular" disk array will breathe more freely, and their speed will also increase. And since data grows inexorably, finding something to fill the freed space on the storage system is never a problem.